Test Report: Docker_Linux_containerd_arm64 22081

502ebf1e50e408071a7e5daf27f82abd53674654:2025-12-09:42698

Failed tests (34/369)

Order  Failed test  Duration (s)
171 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy 500.7
173 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart 367.82
175 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods 2.22
185 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd 2.27
186 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly 2.21
187 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig 736.43
188 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth 2.08
191 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService 0.06
194 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd 1.76
197 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd 2.27
201 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect 2.34
203 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim 241.64
213 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels 2.18
219 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel 0.57
224 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup 0.09
225 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect 111.35
242 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port 2.39
249 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp 0.06
250 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List 0.27
251 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput 0.26
252 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS 0.26
253 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format 0.28
254 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL 0.26
358 TestKubernetesUpgrade 796.41
415 TestStartStop/group/no-preload/serial/FirstStart 515.83
437 TestStartStop/group/newest-cni/serial/FirstStart 503.74
438 TestStartStop/group/no-preload/serial/DeployApp 3.13
439 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 89.69
442 TestStartStop/group/no-preload/serial/SecondStart 370.05
444 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 102.12
447 TestStartStop/group/newest-cni/serial/SecondStart 374.48
448 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 542.23
452 TestStartStop/group/newest-cni/serial/Pause 9.14
467 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 279.9
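
A hedged sketch for reproducing one of these failures locally as a Go subtest, assuming a minikube source checkout; the package path, verbosity, and timeout below are assumptions, not taken from this report:

	# Hypothetical local repro of the first failure (run from the minikube repo root):
	go test ./test/integration -v -timeout 30m \
	  -run 'TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy'
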
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (500.7s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-667319 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
E1209 04:18:50.603627 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/addons-221952/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 04:21:06.737420 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/addons-221952/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 04:21:34.450441 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/addons-221952/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 04:22:38.986675 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-717497/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 04:22:38.993144 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-717497/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 04:22:39.004963 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-717497/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 04:22:39.026903 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-717497/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 04:22:39.069272 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-717497/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 04:22:39.150853 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-717497/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 04:22:39.312492 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-717497/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 04:22:39.634315 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-717497/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 04:22:40.276452 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-717497/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 04:22:41.558073 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-717497/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 04:22:44.119538 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-717497/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 04:22:49.241264 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-717497/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 04:22:59.483597 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-717497/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 04:23:19.965625 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-717497/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 04:24:00.928732 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-717497/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 04:25:22.853780 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-717497/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 04:26:06.736206 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/addons-221952/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-667319 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: exit status 109 (8m19.275785744s)

-- stdout --
	* [functional-667319] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22081
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22081-1142328/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-1142328/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "functional-667319" primary control-plane node in "functional-667319" cluster
	* Pulling base image v0.0.48-1765184860-22066 ...
	* Found network options:
	  - HTTP_PROXY=localhost:34739
	* Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
	* Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	
	

-- /stdout --
** stderr ** 
	! Local proxy ignored: not passing HTTP_PROXY=localhost:34739 to docker env.
	! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [functional-667319 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [functional-667319 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001146574s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00013653s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00013653s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Related issue: https://github.com/kubernetes/minikube/issues/4172

** /stderr **
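The stderr above carries two actionable hints: the NO_PROXY warning near its top and the cgroup-driver suggestion near its bottom. A minimal sketch combining them, with values mirrored from this run (profile functional-667319, minikube IP 192.168.49.2); whether this resolves the kubelet health check is not verified here:

	# Include the minikube IP in NO_PROXY, then retry with the flag the log suggests.
	export NO_PROXY=localhost,127.0.0.1,192.168.49.2
	out/minikube-linux-arm64 start -p functional-667319 --memory=4096 --apiserver-port=8441 \
	  --wait=all --driver=docker --container-runtime=containerd \
	  --kubernetes-version=v1.35.0-beta.0 --extra-config=kubelet.cgroup-driver=systemd
	# Kubelet diagnostics kubeadm recommends, run inside the node container:
	out/minikube-linux-arm64 -p functional-667319 ssh -- sudo journalctl -xeu kubelet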
functional_test.go:2241: failed minikube start. args "out/minikube-linux-arm64 start -p functional-667319 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-667319
helpers_test.go:243: (dbg) docker inspect functional-667319:

-- stdout --
	[
	    {
	        "Id": "e5b6511799c8d5c445a335a3bd5cc9a61b518fc27ac93dad8800da366ef32129",
	        "Created": "2025-12-09T04:18:34.060957311Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1182075,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-09T04:18:34.126944158Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e4eb91ed18a24161fce60c7cdd660144ecd5b8c5029dc2dea2c5e423c2f48ce4",
	        "ResolvConfPath": "/var/lib/docker/containers/e5b6511799c8d5c445a335a3bd5cc9a61b518fc27ac93dad8800da366ef32129/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e5b6511799c8d5c445a335a3bd5cc9a61b518fc27ac93dad8800da366ef32129/hostname",
	        "HostsPath": "/var/lib/docker/containers/e5b6511799c8d5c445a335a3bd5cc9a61b518fc27ac93dad8800da366ef32129/hosts",
	        "LogPath": "/var/lib/docker/containers/e5b6511799c8d5c445a335a3bd5cc9a61b518fc27ac93dad8800da366ef32129/e5b6511799c8d5c445a335a3bd5cc9a61b518fc27ac93dad8800da366ef32129-json.log",
	        "Name": "/functional-667319",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-667319:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-667319",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e5b6511799c8d5c445a335a3bd5cc9a61b518fc27ac93dad8800da366ef32129",
	                "LowerDir": "/var/lib/docker/overlay2/b0239006282b6e4609a1f554d0a3fb94c749a13505795c8e4078cb2db194e8e0-init/diff:/var/lib/docker/overlay2/c44bb57aa59cc265266f37f2bb6e7ec0e7d641c3b4aeaa57e6d23deec6f0d1d4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b0239006282b6e4609a1f554d0a3fb94c749a13505795c8e4078cb2db194e8e0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b0239006282b6e4609a1f554d0a3fb94c749a13505795c8e4078cb2db194e8e0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b0239006282b6e4609a1f554d0a3fb94c749a13505795c8e4078cb2db194e8e0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-667319",
	                "Source": "/var/lib/docker/volumes/functional-667319/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-667319",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-667319",
	                "name.minikube.sigs.k8s.io": "functional-667319",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7c81dabcd9e57af9bce0bc0f5619f6ef3a27af43f4b649283a5bd778ab256415",
	            "SandboxKey": "/var/run/docker/netns/7c81dabcd9e5",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33900"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33901"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33904"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33902"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33903"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-667319": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "fe:40:bd:46:56:d8",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "88b3a65de70c15005c532a44219284d4df94e474ca5b78b04514c2f932b03beb",
	                    "EndpointID": "bdef7b156f4a28c1f641ae70b42db2750bb810ae6fe93fd65325e62eb232fe91",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-667319",
	                        "e5b6511799c8"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-667319 -n functional-667319
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-667319 -n functional-667319: exit status 6 (336.854827ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1209 04:26:48.781296 1187132 status.go:458] kubeconfig endpoint: get endpoint: "functional-667319" does not appear in /home/jenkins/minikube-integration/22081-1142328/kubeconfig

** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
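The status output above recommends `minikube update-context` for the stale kubeconfig entry; a one-line sketch for this profile (effect not verified in this run):

	out/minikube-linux-arm64 -p functional-667319 update-context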
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-667319 logs -n 25
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                              ARGS                                                                               │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-717497 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                                        │ functional-717497 │ jenkins │ v1.37.0 │ 09 Dec 25 04:18 UTC │ 09 Dec 25 04:18 UTC │
	│ ssh            │ functional-717497 ssh sudo cat /etc/ssl/certs/11442312.pem                                                                                                      │ functional-717497 │ jenkins │ v1.37.0 │ 09 Dec 25 04:18 UTC │ 09 Dec 25 04:18 UTC │
	│ image          │ functional-717497 image load --daemon kicbase/echo-server:functional-717497 --alsologtostderr                                                                   │ functional-717497 │ jenkins │ v1.37.0 │ 09 Dec 25 04:18 UTC │ 09 Dec 25 04:18 UTC │
	│ ssh            │ functional-717497 ssh sudo cat /usr/share/ca-certificates/11442312.pem                                                                                          │ functional-717497 │ jenkins │ v1.37.0 │ 09 Dec 25 04:18 UTC │ 09 Dec 25 04:18 UTC │
	│ ssh            │ functional-717497 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                        │ functional-717497 │ jenkins │ v1.37.0 │ 09 Dec 25 04:18 UTC │ 09 Dec 25 04:18 UTC │
	│ image          │ functional-717497 image ls                                                                                                                                      │ functional-717497 │ jenkins │ v1.37.0 │ 09 Dec 25 04:18 UTC │ 09 Dec 25 04:18 UTC │
	│ ssh            │ functional-717497 ssh sudo cat /etc/test/nested/copy/1144231/hosts                                                                                              │ functional-717497 │ jenkins │ v1.37.0 │ 09 Dec 25 04:18 UTC │ 09 Dec 25 04:18 UTC │
	│ image          │ functional-717497 image save kicbase/echo-server:functional-717497 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr │ functional-717497 │ jenkins │ v1.37.0 │ 09 Dec 25 04:18 UTC │ 09 Dec 25 04:18 UTC │
	│ image          │ functional-717497 image rm kicbase/echo-server:functional-717497 --alsologtostderr                                                                              │ functional-717497 │ jenkins │ v1.37.0 │ 09 Dec 25 04:18 UTC │ 09 Dec 25 04:18 UTC │
	│ image          │ functional-717497 image ls                                                                                                                                      │ functional-717497 │ jenkins │ v1.37.0 │ 09 Dec 25 04:18 UTC │ 09 Dec 25 04:18 UTC │
	│ image          │ functional-717497 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr                                       │ functional-717497 │ jenkins │ v1.37.0 │ 09 Dec 25 04:18 UTC │ 09 Dec 25 04:18 UTC │
	│ image          │ functional-717497 image ls                                                                                                                                      │ functional-717497 │ jenkins │ v1.37.0 │ 09 Dec 25 04:18 UTC │ 09 Dec 25 04:18 UTC │
	│ update-context │ functional-717497 update-context --alsologtostderr -v=2                                                                                                         │ functional-717497 │ jenkins │ v1.37.0 │ 09 Dec 25 04:18 UTC │ 09 Dec 25 04:18 UTC │
	│ image          │ functional-717497 image save --daemon kicbase/echo-server:functional-717497 --alsologtostderr                                                                   │ functional-717497 │ jenkins │ v1.37.0 │ 09 Dec 25 04:18 UTC │ 09 Dec 25 04:18 UTC │
	│ update-context │ functional-717497 update-context --alsologtostderr -v=2                                                                                                         │ functional-717497 │ jenkins │ v1.37.0 │ 09 Dec 25 04:18 UTC │ 09 Dec 25 04:18 UTC │
	│ update-context │ functional-717497 update-context --alsologtostderr -v=2                                                                                                         │ functional-717497 │ jenkins │ v1.37.0 │ 09 Dec 25 04:18 UTC │ 09 Dec 25 04:18 UTC │
	│ image          │ functional-717497 image ls --format short --alsologtostderr                                                                                                     │ functional-717497 │ jenkins │ v1.37.0 │ 09 Dec 25 04:18 UTC │ 09 Dec 25 04:18 UTC │
	│ image          │ functional-717497 image ls --format yaml --alsologtostderr                                                                                                      │ functional-717497 │ jenkins │ v1.37.0 │ 09 Dec 25 04:18 UTC │ 09 Dec 25 04:18 UTC │
	│ ssh            │ functional-717497 ssh pgrep buildkitd                                                                                                                           │ functional-717497 │ jenkins │ v1.37.0 │ 09 Dec 25 04:18 UTC │                     │
	│ image          │ functional-717497 image ls --format json --alsologtostderr                                                                                                      │ functional-717497 │ jenkins │ v1.37.0 │ 09 Dec 25 04:18 UTC │ 09 Dec 25 04:18 UTC │
	│ image          │ functional-717497 image build -t localhost/my-image:functional-717497 testdata/build --alsologtostderr                                                          │ functional-717497 │ jenkins │ v1.37.0 │ 09 Dec 25 04:18 UTC │ 09 Dec 25 04:18 UTC │
	│ image          │ functional-717497 image ls --format table --alsologtostderr                                                                                                     │ functional-717497 │ jenkins │ v1.37.0 │ 09 Dec 25 04:18 UTC │ 09 Dec 25 04:18 UTC │
	│ image          │ functional-717497 image ls                                                                                                                                      │ functional-717497 │ jenkins │ v1.37.0 │ 09 Dec 25 04:18 UTC │ 09 Dec 25 04:18 UTC │
	│ delete         │ -p functional-717497                                                                                                                                            │ functional-717497 │ jenkins │ v1.37.0 │ 09 Dec 25 04:18 UTC │ 09 Dec 25 04:18 UTC │
	│ start          │ -p functional-667319 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0         │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:18 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/09 04:18:29
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 04:18:29.204918 1181690 out.go:360] Setting OutFile to fd 1 ...
	I1209 04:18:29.205025 1181690 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 04:18:29.205029 1181690 out.go:374] Setting ErrFile to fd 2...
	I1209 04:18:29.205032 1181690 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 04:18:29.205273 1181690 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-1142328/.minikube/bin
	I1209 04:18:29.205655 1181690 out.go:368] Setting JSON to false
	I1209 04:18:29.206436 1181690 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":25233,"bootTime":1765228677,"procs":153,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1209 04:18:29.206487 1181690 start.go:143] virtualization:  
	I1209 04:18:29.210929 1181690 out.go:179] * [functional-667319] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1209 04:18:29.214568 1181690 out.go:179]   - MINIKUBE_LOCATION=22081
	I1209 04:18:29.214666 1181690 notify.go:221] Checking for updates...
	I1209 04:18:29.221130 1181690 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 04:18:29.224257 1181690 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22081-1142328/kubeconfig
	I1209 04:18:29.227482 1181690 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-1142328/.minikube
	I1209 04:18:29.230682 1181690 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1209 04:18:29.233815 1181690 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 04:18:29.237216 1181690 driver.go:422] Setting default libvirt URI to qemu:///system
	I1209 04:18:29.265797 1181690 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1209 04:18:29.265929 1181690 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 04:18:29.319216 1181690 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-12-09 04:18:29.310329495 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1209 04:18:29.319299 1181690 docker.go:319] overlay module found
	I1209 04:18:29.322534 1181690 out.go:179] * Using the docker driver based on user configuration
	I1209 04:18:29.325484 1181690 start.go:309] selected driver: docker
	I1209 04:18:29.325493 1181690 start.go:927] validating driver "docker" against <nil>
	I1209 04:18:29.325504 1181690 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 04:18:29.326243 1181690 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 04:18:29.381124 1181690 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-12-09 04:18:29.372526494 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1209 04:18:29.381280 1181690 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1209 04:18:29.381488 1181690 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 04:18:29.384407 1181690 out.go:179] * Using Docker driver with root privileges
	I1209 04:18:29.387349 1181690 cni.go:84] Creating CNI manager for ""
	I1209 04:18:29.387409 1181690 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1209 04:18:29.387416 1181690 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1209 04:18:29.387526 1181690 start.go:353] cluster config:
	{Name:functional-667319 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-667319 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 04:18:29.392539 1181690 out.go:179] * Starting "functional-667319" primary control-plane node in "functional-667319" cluster
	I1209 04:18:29.395408 1181690 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1209 04:18:29.398300 1181690 out.go:179] * Pulling base image v0.0.48-1765184860-22066 ...
	I1209 04:18:29.401199 1181690 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local docker daemon
	I1209 04:18:29.401296 1181690 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1209 04:18:29.401315 1181690 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1209 04:18:29.401323 1181690 cache.go:65] Caching tarball of preloaded images
	I1209 04:18:29.401420 1181690 preload.go:238] Found /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1209 04:18:29.401429 1181690 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1209 04:18:29.401767 1181690 profile.go:143] Saving config to /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/config.json ...
	I1209 04:18:29.401784 1181690 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/config.json: {Name:mk573ebc352f76a50b397be0f1c5137667ba678e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
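
The cluster config dumped above is what gets persisted to the profile's config.json here. A minimal sketch of reading a few of those fields back out; the struct below is a hand-picked subset inferred from the dump, not minikube's actual schema:

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os"
    )

    // Hypothetical subset of the profile config; field names are inferred
    // from the struct dump in the log above.
    type clusterConfig struct {
    	Name             string
    	Driver           string
    	Memory           int
    	CPUs             int
    	KubernetesConfig struct {
    		KubernetesVersion string
    		ContainerRuntime  string
    	}
    }

    func main() {
    	// Path as logged for this profile; adjust for your own minikube home.
    	data, err := os.ReadFile(os.ExpandEnv("$HOME/.minikube/profiles/functional-667319/config.json"))
    	if err != nil {
    		panic(err)
    	}
    	var cfg clusterConfig
    	if err := json.Unmarshal(data, &cfg); err != nil {
    		panic(err)
    	}
    	fmt.Printf("%s: driver=%s runtime=%s k8s=%s\n", cfg.Name, cfg.Driver,
    		cfg.KubernetesConfig.ContainerRuntime, cfg.KubernetesConfig.KubernetesVersion)
    }
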
	I1209 04:18:29.421163 1181690 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local docker daemon, skipping pull
	I1209 04:18:29.421180 1181690 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c exists in daemon, skipping load
	I1209 04:18:29.421192 1181690 cache.go:243] Successfully downloaded all kic artifacts
	I1209 04:18:29.421226 1181690 start.go:360] acquireMachinesLock for functional-667319: {Name:mk6c31f0747796f5f8ac8ea1653d6ee60fe2a47d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 04:18:29.421344 1181690 start.go:364] duration metric: took 104.333µs to acquireMachinesLock for "functional-667319"
	I1209 04:18:29.421366 1181690 start.go:93] Provisioning new machine with config: &{Name:functional-667319 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-667319 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1209 04:18:29.421428 1181690 start.go:125] createHost starting for "" (driver="docker")
	I1209 04:18:29.424719 1181690 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	W1209 04:18:29.425002 1181690 out.go:285] ! Local proxy ignored: not passing HTTP_PROXY=localhost:34739 to docker env.
	I1209 04:18:29.425028 1181690 start.go:159] libmachine.API.Create for "functional-667319" (driver="docker")
	I1209 04:18:29.425064 1181690 client.go:173] LocalClient.Create starting
	I1209 04:18:29.425120 1181690 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem
	I1209 04:18:29.425151 1181690 main.go:143] libmachine: Decoding PEM data...
	I1209 04:18:29.425169 1181690 main.go:143] libmachine: Parsing certificate...
	I1209 04:18:29.425225 1181690 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/cert.pem
	I1209 04:18:29.425264 1181690 main.go:143] libmachine: Decoding PEM data...
	I1209 04:18:29.425275 1181690 main.go:143] libmachine: Parsing certificate...
	I1209 04:18:29.425631 1181690 cli_runner.go:164] Run: docker network inspect functional-667319 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1209 04:18:29.441023 1181690 cli_runner.go:211] docker network inspect functional-667319 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1209 04:18:29.441094 1181690 network_create.go:284] running [docker network inspect functional-667319] to gather additional debugging logs...
	I1209 04:18:29.441110 1181690 cli_runner.go:164] Run: docker network inspect functional-667319
	W1209 04:18:29.457025 1181690 cli_runner.go:211] docker network inspect functional-667319 returned with exit code 1
	I1209 04:18:29.457053 1181690 network_create.go:287] error running [docker network inspect functional-667319]: docker network inspect functional-667319: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network functional-667319 not found
	I1209 04:18:29.457065 1181690 network_create.go:289] output of [docker network inspect functional-667319]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network functional-667319 not found
	
	** /stderr **
	I1209 04:18:29.457182 1181690 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1209 04:18:29.474026 1181690 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400186f5d0}
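
The subnet above is chosen by testing candidate private ranges against the host's existing interfaces and taking the first one that does not collide. A simplified sketch of that check, with an illustrative candidate list (minikube's network.go walks its own sequence of private subnets):

    package main

    import (
    	"fmt"
    	"net"
    )

    // overlaps reports whether any local interface address falls inside cidr.
    func overlaps(cidr string) (bool, error) {
    	_, subnet, err := net.ParseCIDR(cidr)
    	if err != nil {
    		return false, err
    	}
    	addrs, err := net.InterfaceAddrs()
    	if err != nil {
    		return false, err
    	}
    	for _, a := range addrs {
    		if ipn, ok := a.(*net.IPNet); ok && subnet.Contains(ipn.IP) {
    			return true, nil
    		}
    	}
    	return false, nil
    }

    func main() {
    	// Hypothetical candidates; the logged run settled on 192.168.49.0/24.
    	for _, c := range []string{"192.168.49.0/24", "192.168.58.0/24", "192.168.67.0/24"} {
    		busy, err := overlaps(c)
    		if err != nil {
    			panic(err)
    		}
    		if !busy {
    			fmt.Println("using free private subnet", c)
    			return
    		}
    	}
    }
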
	I1209 04:18:29.474058 1181690 network_create.go:124] attempt to create docker network functional-667319 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1209 04:18:29.474113 1181690 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=functional-667319 functional-667319
	I1209 04:18:29.526052 1181690 network_create.go:108] docker network functional-667319 192.168.49.0/24 created
	I1209 04:18:29.526074 1181690 kic.go:121] calculated static IP "192.168.49.2" for the "functional-667319" container
	I1209 04:18:29.526144 1181690 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1209 04:18:29.540730 1181690 cli_runner.go:164] Run: docker volume create functional-667319 --label name.minikube.sigs.k8s.io=functional-667319 --label created_by.minikube.sigs.k8s.io=true
	I1209 04:18:29.558452 1181690 oci.go:103] Successfully created a docker volume functional-667319
	I1209 04:18:29.558532 1181690 cli_runner.go:164] Run: docker run --rm --name functional-667319-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-667319 --entrypoint /usr/bin/test -v functional-667319:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c -d /var/lib
	I1209 04:18:30.124549 1181690 oci.go:107] Successfully prepared a docker volume functional-667319
	I1209 04:18:30.124612 1181690 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1209 04:18:30.124621 1181690 kic.go:194] Starting extracting preloaded images to volume ...
	I1209 04:18:30.124697 1181690 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v functional-667319:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c -I lz4 -xf /preloaded.tar -C /extractDir
	I1209 04:18:33.987163 1181690 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v functional-667319:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c -I lz4 -xf /preloaded.tar -C /extractDir: (3.862430131s)
	I1209 04:18:33.987183 1181690 kic.go:203] duration metric: took 3.862559841s to extract preloaded images to volume ...
	W1209 04:18:33.987328 1181690 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1209 04:18:33.987422 1181690 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1209 04:18:34.045739 1181690 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname functional-667319 --name functional-667319 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-667319 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=functional-667319 --network functional-667319 --ip 192.168.49.2 --volume functional-667319:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8441 --publish=127.0.0.1::8441 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c
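
Note that every port in the docker run above is published to an ephemeral port on 127.0.0.1 (--publish=127.0.0.1::22 and friends); the SSH steps below therefore dial 127.0.0.1:33900 rather than the container IP. A small sketch of recovering the mapped SSH port with the same inspect template the log runs, assuming the docker CLI is on PATH and the container exists:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// Same Go template minikube passes to `docker container inspect -f`.
    	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
    	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl,
    		"functional-667319").Output()
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("ssh to 127.0.0.1:" + strings.TrimSpace(string(out)))
    }
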
	I1209 04:18:34.328371 1181690 cli_runner.go:164] Run: docker container inspect functional-667319 --format={{.State.Running}}
	I1209 04:18:34.356856 1181690 cli_runner.go:164] Run: docker container inspect functional-667319 --format={{.State.Status}}
	I1209 04:18:34.385701 1181690 cli_runner.go:164] Run: docker exec functional-667319 stat /var/lib/dpkg/alternatives/iptables
	I1209 04:18:34.448336 1181690 oci.go:144] the created container "functional-667319" has a running status.
	I1209 04:18:34.448356 1181690 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22081-1142328/.minikube/machines/functional-667319/id_rsa...
	I1209 04:18:34.590892 1181690 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22081-1142328/.minikube/machines/functional-667319/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1209 04:18:34.616547 1181690 cli_runner.go:164] Run: docker container inspect functional-667319 --format={{.State.Status}}
	I1209 04:18:34.645902 1181690 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1209 04:18:34.645913 1181690 kic_runner.go:114] Args: [docker exec --privileged functional-667319 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1209 04:18:34.712746 1181690 cli_runner.go:164] Run: docker container inspect functional-667319 --format={{.State.Status}}
	I1209 04:18:34.739745 1181690 machine.go:94] provisionDockerMachine start ...
	I1209 04:18:34.739833 1181690 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-667319
	I1209 04:18:34.766446 1181690 main.go:143] libmachine: Using SSH client type: native
	I1209 04:18:34.766795 1181690 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 33900 <nil> <nil>}
	I1209 04:18:34.766802 1181690 main.go:143] libmachine: About to run SSH command:
	hostname
	I1209 04:18:34.767473 1181690 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:38220->127.0.0.1:33900: read: connection reset by peer
	I1209 04:18:37.919716 1181690 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-667319
	
	I1209 04:18:37.919734 1181690 ubuntu.go:182] provisioning hostname "functional-667319"
	I1209 04:18:37.919807 1181690 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-667319
	I1209 04:18:37.937658 1181690 main.go:143] libmachine: Using SSH client type: native
	I1209 04:18:37.937961 1181690 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 33900 <nil> <nil>}
	I1209 04:18:37.937969 1181690 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-667319 && echo "functional-667319" | sudo tee /etc/hostname
	I1209 04:18:38.105800 1181690 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-667319
	
	I1209 04:18:38.105874 1181690 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-667319
	I1209 04:18:38.124605 1181690 main.go:143] libmachine: Using SSH client type: native
	I1209 04:18:38.124916 1181690 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 33900 <nil> <nil>}
	I1209 04:18:38.124929 1181690 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-667319' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-667319/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-667319' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 04:18:38.276300 1181690 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1209 04:18:38.276316 1181690 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22081-1142328/.minikube CaCertPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22081-1142328/.minikube}
	I1209 04:18:38.276339 1181690 ubuntu.go:190] setting up certificates
	I1209 04:18:38.276347 1181690 provision.go:84] configureAuth start
	I1209 04:18:38.276407 1181690 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-667319
	I1209 04:18:38.298940 1181690 provision.go:143] copyHostCerts
	I1209 04:18:38.298994 1181690 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.pem, removing ...
	I1209 04:18:38.299001 1181690 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.pem
	I1209 04:18:38.299076 1181690 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.pem (1078 bytes)
	I1209 04:18:38.299224 1181690 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-1142328/.minikube/cert.pem, removing ...
	I1209 04:18:38.299229 1181690 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-1142328/.minikube/cert.pem
	I1209 04:18:38.299257 1181690 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22081-1142328/.minikube/cert.pem (1123 bytes)
	I1209 04:18:38.299307 1181690 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-1142328/.minikube/key.pem, removing ...
	I1209 04:18:38.299310 1181690 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-1142328/.minikube/key.pem
	I1209 04:18:38.299333 1181690 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22081-1142328/.minikube/key.pem (1675 bytes)
	I1209 04:18:38.299376 1181690 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca-key.pem org=jenkins.functional-667319 san=[127.0.0.1 192.168.49.2 functional-667319 localhost minikube]
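
The server cert is issued with SANs covering loopback, the node IP, and the hostnames listed above. A compressed sketch of the same idea with crypto/x509; it is self-signed here for brevity, whereas the real flow signs with the ca.pem/ca-key.pem pair from the log:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	tpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.functional-667319"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config above
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// SANs matching the san=[...] list in the log.
    		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
    		DNSNames:    []string{"functional-667319", "localhost", "minikube"},
    	}
    	// Self-signed for brevity; the real flow signs with the minikube CA key.
    	der, err := x509.CreateCertificate(rand.Reader, tpl, tpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
    		panic(err)
    	}
    }
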
	I1209 04:18:38.353979 1181690 provision.go:177] copyRemoteCerts
	I1209 04:18:38.354038 1181690 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 04:18:38.354078 1181690 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-667319
	I1209 04:18:38.371581 1181690 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/functional-667319/id_rsa Username:docker}
	I1209 04:18:38.475497 1181690 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1209 04:18:38.492003 1181690 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1209 04:18:38.509314 1181690 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1209 04:18:38.526298 1181690 provision.go:87] duration metric: took 249.929309ms to configureAuth
	I1209 04:18:38.526329 1181690 ubuntu.go:206] setting minikube options for container-runtime
	I1209 04:18:38.526505 1181690 config.go:182] Loaded profile config "functional-667319": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1209 04:18:38.526511 1181690 machine.go:97] duration metric: took 3.786756193s to provisionDockerMachine
	I1209 04:18:38.526516 1181690 client.go:176] duration metric: took 9.101448027s to LocalClient.Create
	I1209 04:18:38.526529 1181690 start.go:167] duration metric: took 9.101501868s to libmachine.API.Create "functional-667319"
	I1209 04:18:38.526535 1181690 start.go:293] postStartSetup for "functional-667319" (driver="docker")
	I1209 04:18:38.526544 1181690 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 04:18:38.526598 1181690 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 04:18:38.526633 1181690 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-667319
	I1209 04:18:38.547176 1181690 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/functional-667319/id_rsa Username:docker}
	I1209 04:18:38.651954 1181690 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 04:18:38.655038 1181690 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1209 04:18:38.655054 1181690 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1209 04:18:38.655065 1181690 filesync.go:126] Scanning /home/jenkins/minikube-integration/22081-1142328/.minikube/addons for local assets ...
	I1209 04:18:38.655118 1181690 filesync.go:126] Scanning /home/jenkins/minikube-integration/22081-1142328/.minikube/files for local assets ...
	I1209 04:18:38.655206 1181690 filesync.go:149] local asset: /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/ssl/certs/11442312.pem -> 11442312.pem in /etc/ssl/certs
	I1209 04:18:38.655285 1181690 filesync.go:149] local asset: /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/test/nested/copy/1144231/hosts -> hosts in /etc/test/nested/copy/1144231
	I1209 04:18:38.655328 1181690 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1144231
	I1209 04:18:38.662615 1181690 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/ssl/certs/11442312.pem --> /etc/ssl/certs/11442312.pem (1708 bytes)
	I1209 04:18:38.678880 1181690 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/test/nested/copy/1144231/hosts --> /etc/test/nested/copy/1144231/hosts (40 bytes)
	I1209 04:18:38.696568 1181690 start.go:296] duration metric: took 170.019026ms for postStartSetup
	I1209 04:18:38.696969 1181690 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-667319
	I1209 04:18:38.712904 1181690 profile.go:143] Saving config to /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/config.json ...
	I1209 04:18:38.713170 1181690 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1209 04:18:38.713212 1181690 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-667319
	I1209 04:18:38.729199 1181690 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/functional-667319/id_rsa Username:docker}
	I1209 04:18:38.832870 1181690 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1209 04:18:38.837512 1181690 start.go:128] duration metric: took 9.416071818s to createHost
	I1209 04:18:38.837527 1181690 start.go:83] releasing machines lock for "functional-667319", held for 9.416176431s
	I1209 04:18:38.837596 1181690 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-667319
	I1209 04:18:38.858538 1181690 out.go:179] * Found network options:
	I1209 04:18:38.861514 1181690 out.go:179]   - HTTP_PROXY=localhost:34739
	W1209 04:18:38.864664 1181690 out.go:285] ! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
	I1209 04:18:38.867745 1181690 out.go:179] * Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
	I1209 04:18:38.870715 1181690 ssh_runner.go:195] Run: cat /version.json
	I1209 04:18:38.870774 1181690 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-667319
	I1209 04:18:38.870814 1181690 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 04:18:38.870880 1181690 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-667319
	I1209 04:18:38.890006 1181690 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/functional-667319/id_rsa Username:docker}
	I1209 04:18:38.890448 1181690 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/functional-667319/id_rsa Username:docker}
	I1209 04:18:38.992431 1181690 ssh_runner.go:195] Run: systemctl --version
	I1209 04:18:39.078214 1181690 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 04:18:39.082636 1181690 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 04:18:39.082725 1181690 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 04:18:39.109371 1181690 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1209 04:18:39.109385 1181690 start.go:496] detecting cgroup driver to use...
	I1209 04:18:39.109415 1181690 detect.go:187] detected "cgroupfs" cgroup driver on host os
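
The "cgroupfs" detection above reflects a probe of the host. One common heuristic (an assumption here, not necessarily minikube's exact check) is to look for the unified cgroup v2 hierarchy and treat its absence as a v1/cgroupfs host:

    package main

    import (
    	"fmt"
    	"os"
    )

    func main() {
    	// cgroup v2 exposes cgroup.controllers at the hierarchy root; its
    	// absence usually indicates a v1 setup like this host's, where the
    	// cgroupfs driver is the typical pairing.
    	if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
    		fmt.Println("cgroup v2 (systemd driver is the usual pairing)")
    	} else {
    		fmt.Println("cgroup v1 (cgroupfs driver)")
    	}
    }
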
	I1209 04:18:39.109476 1181690 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1209 04:18:39.124209 1181690 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1209 04:18:39.137718 1181690 docker.go:218] disabling cri-docker service (if available) ...
	I1209 04:18:39.137785 1181690 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 04:18:39.157200 1181690 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 04:18:39.175756 1181690 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 04:18:39.293090 1181690 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 04:18:39.415035 1181690 docker.go:234] disabling docker service ...
	I1209 04:18:39.415091 1181690 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 04:18:39.436000 1181690 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 04:18:39.450194 1181690 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 04:18:39.569102 1181690 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 04:18:39.685732 1181690 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 04:18:39.698755 1181690 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 04:18:39.713555 1181690 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1209 04:18:39.722194 1181690 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1209 04:18:39.731110 1181690 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1209 04:18:39.731171 1181690 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1209 04:18:39.740099 1181690 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1209 04:18:39.748613 1181690 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1209 04:18:39.757268 1181690 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1209 04:18:39.765996 1181690 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 04:18:39.773811 1181690 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1209 04:18:39.782322 1181690 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1209 04:18:39.790681 1181690 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1209 04:18:39.798963 1181690 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 04:18:39.806177 1181690 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 04:18:39.813312 1181690 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 04:18:39.922820 1181690 ssh_runner.go:195] Run: sudo systemctl restart containerd
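
The run of sed commands above rewrites /etc/containerd/config.toml in place before this restart; the key edit for the cgroupfs driver is forcing SystemdCgroup = false. The same substitution expressed in Go, as a sketch over an in-memory copy of the file:

    package main

    import (
    	"fmt"
    	"regexp"
    )

    func main() {
    	conf := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
      SystemdCgroup = true`
    	// Equivalent of: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
    	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
    	fmt.Println(re.ReplaceAllString(conf, "${1}SystemdCgroup = false"))
    }
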
	I1209 04:18:40.082191 1181690 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1209 04:18:40.082285 1181690 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1209 04:18:40.087257 1181690 start.go:564] Will wait 60s for crictl version
	I1209 04:18:40.087330 1181690 ssh_runner.go:195] Run: which crictl
	I1209 04:18:40.091598 1181690 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1209 04:18:40.124551 1181690 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1209 04:18:40.124640 1181690 ssh_runner.go:195] Run: containerd --version
	I1209 04:18:40.147833 1181690 ssh_runner.go:195] Run: containerd --version
	I1209 04:18:40.173200 1181690 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1209 04:18:40.176165 1181690 cli_runner.go:164] Run: docker network inspect functional-667319 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1209 04:18:40.193118 1181690 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1209 04:18:40.197226 1181690 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 04:18:40.207276 1181690 kubeadm.go:884] updating cluster {Name:functional-667319 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-667319 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 04:18:40.207380 1181690 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1209 04:18:40.207446 1181690 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 04:18:40.234910 1181690 containerd.go:627] all images are preloaded for containerd runtime.
	I1209 04:18:40.234922 1181690 containerd.go:534] Images already preloaded, skipping extraction
	I1209 04:18:40.234983 1181690 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 04:18:40.259599 1181690 containerd.go:627] all images are preloaded for containerd runtime.
	I1209 04:18:40.259611 1181690 cache_images.go:86] Images are preloaded, skipping loading
	I1209 04:18:40.259617 1181690 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
	I1209 04:18:40.259711 1181690 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-667319 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-667319 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1209 04:18:40.259776 1181690 ssh_runner.go:195] Run: sudo crictl info
	I1209 04:18:40.284519 1181690 cni.go:84] Creating CNI manager for ""
	I1209 04:18:40.284529 1181690 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1209 04:18:40.284548 1181690 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1209 04:18:40.284572 1181690 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-667319 NodeName:functional-667319 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1209 04:18:40.284687 1181690 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-667319"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1209 04:18:40.284754 1181690 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1209 04:18:40.292756 1181690 binaries.go:51] Found k8s binaries, skipping transfer
	I1209 04:18:40.292818 1181690 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1209 04:18:40.300556 1181690 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1209 04:18:40.313475 1181690 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1209 04:18:40.325992 1181690 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
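
The 2237-byte payload written here is the multi-document kubeadm YAML shown above (it is promoted from kubeadm.yaml.new to kubeadm.yaml just before init). A sketch of walking those documents with gopkg.in/yaml.v3 and reporting each kind:

    package main

    import (
    	"fmt"
    	"io"
    	"os"

    	"gopkg.in/yaml.v3"
    )

    func main() {
    	// The generated config is a multi-document YAML stream; decode each
    	// document and report its kind (InitConfiguration, ClusterConfiguration, ...).
    	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
    	if err != nil {
    		panic(err)
    	}
    	defer f.Close()
    	dec := yaml.NewDecoder(f)
    	for {
    		var doc struct {
    			APIVersion string `yaml:"apiVersion"`
    			Kind       string `yaml:"kind"`
    		}
    		if err := dec.Decode(&doc); err == io.EOF {
    			break
    		} else if err != nil {
    			panic(err)
    		}
    		fmt.Println(doc.Kind, doc.APIVersion)
    	}
    }
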
	I1209 04:18:40.338439 1181690 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1209 04:18:40.341964 1181690 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 04:18:40.351214 1181690 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 04:18:40.465426 1181690 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 04:18:40.481455 1181690 certs.go:69] Setting up /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319 for IP: 192.168.49.2
	I1209 04:18:40.481466 1181690 certs.go:195] generating shared ca certs ...
	I1209 04:18:40.481480 1181690 certs.go:227] acquiring lock for ca certs: {Name:mk15788702f8c4e23b5aeab3f44961d296fab259 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 04:18:40.481621 1181690 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.key
	I1209 04:18:40.481670 1181690 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/proxy-client-ca.key
	I1209 04:18:40.481676 1181690 certs.go:257] generating profile certs ...
	I1209 04:18:40.481731 1181690 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/client.key
	I1209 04:18:40.481741 1181690 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/client.crt with IP's: []
	I1209 04:18:41.080936 1181690 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/client.crt ...
	I1209 04:18:41.080952 1181690 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/client.crt: {Name:mkf3bdb384a02d9ddee4d4fb76ce831c03b056f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 04:18:41.081154 1181690 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/client.key ...
	I1209 04:18:41.081160 1181690 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/client.key: {Name:mk720c9ccfeaa8a2cd0ee2bda926880388858ed6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 04:18:41.081261 1181690 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/apiserver.key.c80eb595
	I1209 04:18:41.081272 1181690 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/apiserver.crt.c80eb595 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1209 04:18:41.180008 1181690 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/apiserver.crt.c80eb595 ...
	I1209 04:18:41.180027 1181690 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/apiserver.crt.c80eb595: {Name:mka2e19c54b03f20264fe636ee16e2a33aeb03cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 04:18:41.180179 1181690 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/apiserver.key.c80eb595 ...
	I1209 04:18:41.180186 1181690 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/apiserver.key.c80eb595: {Name:mkd1a25832b55ee9080d9aec7535ff72db49438e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 04:18:41.180265 1181690 certs.go:382] copying /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/apiserver.crt.c80eb595 -> /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/apiserver.crt
	I1209 04:18:41.180336 1181690 certs.go:386] copying /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/apiserver.key.c80eb595 -> /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/apiserver.key
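
The SAN list for this apiserver cert starts with 10.96.0.1: the first usable address of the ServiceCIDR 10.96.0.0/12 from the config, which is where the in-cluster kubernetes Service lives. A sketch of that derivation (simplified to bump only the last octet, which suffices for this CIDR):

    package main

    import (
    	"fmt"
    	"net"
    )

    func main() {
    	_, cidr, err := net.ParseCIDR("10.96.0.0/12")
    	if err != nil {
    		panic(err)
    	}
    	// First usable address: network address + 1.
    	ip := cidr.IP.To4()
    	first := net.IPv4(ip[0], ip[1], ip[2], ip[3]+1)
    	fmt.Println(first) // 10.96.0.1
    }
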
	I1209 04:18:41.180389 1181690 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/proxy-client.key
	I1209 04:18:41.180400 1181690 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/proxy-client.crt with IP's: []
	I1209 04:18:41.444871 1181690 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/proxy-client.crt ...
	I1209 04:18:41.444887 1181690 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/proxy-client.crt: {Name:mk1009b33da5d5106bac6bded991980213b8309e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 04:18:41.445080 1181690 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/proxy-client.key ...
	I1209 04:18:41.445089 1181690 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/proxy-client.key: {Name:mk15adbf16c2803947edf00d6e11a4e92bbad30f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 04:18:41.445281 1181690 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/1144231.pem (1338 bytes)
	W1209 04:18:41.445326 1181690 certs.go:480] ignoring /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/1144231_empty.pem, impossibly tiny 0 bytes
	I1209 04:18:41.445335 1181690 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca-key.pem (1679 bytes)
	I1209 04:18:41.445361 1181690 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem (1078 bytes)
	I1209 04:18:41.445387 1181690 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/cert.pem (1123 bytes)
	I1209 04:18:41.445409 1181690 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/key.pem (1675 bytes)
	I1209 04:18:41.445451 1181690 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/ssl/certs/11442312.pem (1708 bytes)
	I1209 04:18:41.446018 1181690 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 04:18:41.464483 1181690 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1209 04:18:41.485635 1181690 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 04:18:41.505616 1181690 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1209 04:18:41.527249 1181690 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1209 04:18:41.545105 1181690 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1209 04:18:41.562186 1181690 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 04:18:41.579876 1181690 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1209 04:18:41.596558 1181690 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/1144231.pem --> /usr/share/ca-certificates/1144231.pem (1338 bytes)
	I1209 04:18:41.613276 1181690 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/ssl/certs/11442312.pem --> /usr/share/ca-certificates/11442312.pem (1708 bytes)
	I1209 04:18:41.629730 1181690 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 04:18:41.646953 1181690 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 04:18:41.659251 1181690 ssh_runner.go:195] Run: openssl version
	I1209 04:18:41.665347 1181690 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/11442312.pem
	I1209 04:18:41.672518 1181690 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/11442312.pem /etc/ssl/certs/11442312.pem
	I1209 04:18:41.679415 1181690 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11442312.pem
	I1209 04:18:41.683303 1181690 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 04:18 /usr/share/ca-certificates/11442312.pem
	I1209 04:18:41.683360 1181690 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11442312.pem
	I1209 04:18:41.732527 1181690 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1209 04:18:41.741038 1181690 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/11442312.pem /etc/ssl/certs/3ec20f2e.0
	I1209 04:18:41.748566 1181690 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1209 04:18:41.757170 1181690 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1209 04:18:41.764798 1181690 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 04:18:41.768796 1181690 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 04:09 /usr/share/ca-certificates/minikubeCA.pem
	I1209 04:18:41.768855 1181690 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 04:18:41.811587 1181690 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1209 04:18:41.819063 1181690 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1209 04:18:41.826649 1181690 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1144231.pem
	I1209 04:18:41.833938 1181690 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1144231.pem /etc/ssl/certs/1144231.pem
	I1209 04:18:41.841442 1181690 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1144231.pem
	I1209 04:18:41.845251 1181690 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 04:18 /usr/share/ca-certificates/1144231.pem
	I1209 04:18:41.845317 1181690 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1144231.pem
	I1209 04:18:41.886121 1181690 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1209 04:18:41.893802 1181690 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/1144231.pem /etc/ssl/certs/51391683.0
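
The b5213941.0, 3ec20f2e.0 and 51391683.0 link names above follow OpenSSL's subject-hash convention: openssl x509 -hash prints the hash, and a <hash>.0 symlink in /etc/ssl/certs lets OpenSSL locate a trusted cert by subject. A sketch wrapping the two steps the log performs, assuming openssl on PATH (and root for /etc/ssl/certs):

    package main

    import (
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // linkBySubjectHash symlinks certPath into dir under OpenSSL's <hash>.0 name.
    func linkBySubjectHash(certPath, dir string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return err
    	}
    	hash := strings.TrimSpace(string(out))
    	link := filepath.Join(dir, hash+".0")
    	os.Remove(link) // mirror ln -fs: replace any existing link
    	return os.Symlink(certPath, link)
    }

    func main() {
    	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
    		panic(err)
    	}
    }
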
	I1209 04:18:41.902105 1181690 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 04:18:41.905774 1181690 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1209 04:18:41.905819 1181690 kubeadm.go:401] StartCluster: {Name:functional-667319 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-667319 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 04:18:41.905900 1181690 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1209 04:18:41.905956 1181690 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 04:18:41.935719 1181690 cri.go:89] found id: ""
	I1209 04:18:41.935783 1181690 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1209 04:18:41.944904 1181690 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 04:18:41.952908 1181690 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1209 04:18:41.952961 1181690 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 04:18:41.962481 1181690 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 04:18:41.962497 1181690 kubeadm.go:158] found existing configuration files:
	
	I1209 04:18:41.962550 1181690 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1209 04:18:41.970416 1181690 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 04:18:41.970469 1181690 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 04:18:41.978995 1181690 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1209 04:18:41.991899 1181690 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 04:18:41.991955 1181690 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 04:18:42.003475 1181690 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1209 04:18:42.017417 1181690 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 04:18:42.017499 1181690 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 04:18:42.027621 1181690 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1209 04:18:42.036501 1181690 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 04:18:42.036569 1181690 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
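	The grep-then-rm sequence above is minikube's stale-kubeconfig cleanup: each file under /etc/kubernetes that does not reference https://control-plane.minikube.internal:8441 is removed before kubeadm init rewrites it. A minimal sketch of the same loop (file names taken from the log):
	
	    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	      sudo grep -q "https://control-plane.minikube.internal:8441" "/etc/kubernetes/$f" \
	        || sudo rm -f "/etc/kubernetes/$f"
	    done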
	I1209 04:18:42.044636 1181690 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1209 04:18:42.092809 1181690 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1209 04:18:42.093759 1181690 kubeadm.go:319] [preflight] Running pre-flight checks
	I1209 04:18:42.245597 1181690 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1209 04:18:42.245662 1181690 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1209 04:18:42.245697 1181690 kubeadm.go:319] OS: Linux
	I1209 04:18:42.245742 1181690 kubeadm.go:319] CGROUPS_CPU: enabled
	I1209 04:18:42.245789 1181690 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1209 04:18:42.245835 1181690 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1209 04:18:42.245882 1181690 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1209 04:18:42.245929 1181690 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1209 04:18:42.245993 1181690 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1209 04:18:42.246038 1181690 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1209 04:18:42.246084 1181690 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1209 04:18:42.246129 1181690 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1209 04:18:42.333955 1181690 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1209 04:18:42.334060 1181690 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1209 04:18:42.334150 1181690 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1209 04:18:42.343117 1181690 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1209 04:18:42.349439 1181690 out.go:252]   - Generating certificates and keys ...
	I1209 04:18:42.349552 1181690 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1209 04:18:42.349632 1181690 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1209 04:18:42.400810 1181690 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1209 04:18:42.535382 1181690 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1209 04:18:42.634496 1181690 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1209 04:18:42.807234 1181690 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1209 04:18:42.923275 1181690 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1209 04:18:42.923574 1181690 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [functional-667319 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1209 04:18:43.417944 1181690 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1209 04:18:43.418318 1181690 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [functional-667319 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1209 04:18:43.745058 1181690 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1209 04:18:44.186210 1181690 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1209 04:18:44.275344 1181690 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1209 04:18:44.275578 1181690 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1209 04:18:44.711585 1181690 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1209 04:18:45.081990 1181690 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1209 04:18:45.294992 1181690 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1209 04:18:45.570864 1181690 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1209 04:18:45.710745 1181690 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1209 04:18:45.711703 1181690 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1209 04:18:45.714623 1181690 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1209 04:18:45.718165 1181690 out.go:252]   - Booting up control plane ...
	I1209 04:18:45.718259 1181690 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1209 04:18:45.718333 1181690 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1209 04:18:45.719287 1181690 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1209 04:18:45.739545 1181690 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1209 04:18:45.739646 1181690 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1209 04:18:45.746984 1181690 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1209 04:18:45.748352 1181690 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1209 04:18:45.748397 1181690 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1209 04:18:45.881377 1181690 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1209 04:18:45.881488 1181690 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1209 04:22:45.882359 1181690 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001146574s
	I1209 04:22:45.882383 1181690 kubeadm.go:319] 
	I1209 04:22:45.882446 1181690 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1209 04:22:45.882481 1181690 kubeadm.go:319] 	- The kubelet is not running
	I1209 04:22:45.882594 1181690 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1209 04:22:45.882601 1181690 kubeadm.go:319] 
	I1209 04:22:45.882713 1181690 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1209 04:22:45.882747 1181690 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1209 04:22:45.882781 1181690 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1209 04:22:45.882784 1181690 kubeadm.go:319] 
	I1209 04:22:45.887607 1181690 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1209 04:22:45.888097 1181690 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1209 04:22:45.888214 1181690 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1209 04:22:45.888453 1181690 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1209 04:22:45.888457 1181690 kubeadm.go:319] 
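	kubeadm polls the kubelet's healthz endpoint for up to 4m0s before declaring the failure above; the probe is equivalent to the curl in the error text. To triage by hand on the node, per kubeadm's own suggestions:
	
	    systemctl status kubelet
	    journalctl -xeu kubelet
	    curl -sSL http://127.0.0.1:10248/healthz   # the endpoint kubeadm waited on; refused/timeout is the failure seen here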
	W1209 04:22:45.888681 1181690 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [functional-667319 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [functional-667319 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001146574s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1209 04:22:45.888778 1181690 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1209 04:22:45.889053 1181690 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1209 04:22:46.296600 1181690 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
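	After the first init failure minikube resets the node state and retries once, which is why the same kubeadm output repeats below. The reset it runs is equivalent to (socket path as logged):
	
	    sudo kubeadm reset --cri-socket /run/containerd/containerd.sock --force
	    sudo systemctl is-active --quiet service kubelet   # non-zero once the kubelet is no longer active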
	I1209 04:22:46.310736 1181690 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1209 04:22:46.310790 1181690 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 04:22:46.318580 1181690 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 04:22:46.318591 1181690 kubeadm.go:158] found existing configuration files:
	
	I1209 04:22:46.318644 1181690 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1209 04:22:46.326307 1181690 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 04:22:46.326369 1181690 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 04:22:46.333761 1181690 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1209 04:22:46.341698 1181690 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 04:22:46.341760 1181690 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 04:22:46.349579 1181690 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1209 04:22:46.357345 1181690 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 04:22:46.357404 1181690 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 04:22:46.364679 1181690 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1209 04:22:46.372004 1181690 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 04:22:46.372138 1181690 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1209 04:22:46.379457 1181690 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1209 04:22:46.418901 1181690 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1209 04:22:46.418949 1181690 kubeadm.go:319] [preflight] Running pre-flight checks
	I1209 04:22:46.487909 1181690 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1209 04:22:46.487973 1181690 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1209 04:22:46.488007 1181690 kubeadm.go:319] OS: Linux
	I1209 04:22:46.488071 1181690 kubeadm.go:319] CGROUPS_CPU: enabled
	I1209 04:22:46.488119 1181690 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1209 04:22:46.488164 1181690 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1209 04:22:46.488210 1181690 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1209 04:22:46.488258 1181690 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1209 04:22:46.488304 1181690 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1209 04:22:46.488347 1181690 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1209 04:22:46.488394 1181690 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1209 04:22:46.488439 1181690 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1209 04:22:46.554157 1181690 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1209 04:22:46.554260 1181690 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1209 04:22:46.554349 1181690 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1209 04:22:46.560519 1181690 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1209 04:22:46.565795 1181690 out.go:252]   - Generating certificates and keys ...
	I1209 04:22:46.565892 1181690 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1209 04:22:46.565970 1181690 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1209 04:22:46.566099 1181690 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1209 04:22:46.566188 1181690 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1209 04:22:46.566269 1181690 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1209 04:22:46.566326 1181690 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1209 04:22:46.566407 1181690 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1209 04:22:46.566472 1181690 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1209 04:22:46.566551 1181690 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1209 04:22:46.566665 1181690 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1209 04:22:46.566731 1181690 kubeadm.go:319] [certs] Using the existing "sa" key
	I1209 04:22:46.566860 1181690 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1209 04:22:47.111495 1181690 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1209 04:22:47.418844 1181690 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1209 04:22:47.540983 1181690 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1209 04:22:47.683464 1181690 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1209 04:22:47.836599 1181690 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1209 04:22:47.837234 1181690 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1209 04:22:47.839956 1181690 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1209 04:22:47.843279 1181690 out.go:252]   - Booting up control plane ...
	I1209 04:22:47.843384 1181690 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1209 04:22:47.843468 1181690 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1209 04:22:47.843539 1181690 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1209 04:22:47.863716 1181690 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1209 04:22:47.863849 1181690 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1209 04:22:47.872846 1181690 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1209 04:22:47.873555 1181690 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1209 04:22:47.873793 1181690 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1209 04:22:48.010450 1181690 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1209 04:22:48.010558 1181690 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1209 04:26:48.010112 1181690 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.00013653s
	I1209 04:26:48.010132 1181690 kubeadm.go:319] 
	I1209 04:26:48.010204 1181690 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1209 04:26:48.010237 1181690 kubeadm.go:319] 	- The kubelet is not running
	I1209 04:26:48.010382 1181690 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1209 04:26:48.010385 1181690 kubeadm.go:319] 
	I1209 04:26:48.010510 1181690 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1209 04:26:48.010552 1181690 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1209 04:26:48.010611 1181690 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1209 04:26:48.010617 1181690 kubeadm.go:319] 
	I1209 04:26:48.016867 1181690 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1209 04:26:48.017419 1181690 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1209 04:26:48.017550 1181690 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1209 04:26:48.017804 1181690 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1209 04:26:48.017808 1181690 kubeadm.go:319] 
	I1209 04:26:48.017926 1181690 kubeadm.go:403] duration metric: took 8m6.112110258s to StartCluster
	I1209 04:26:48.017931 1181690 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1209 04:26:48.017974 1181690 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:26:48.018040 1181690 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:26:48.045879 1181690 cri.go:89] found id: ""
	I1209 04:26:48.045893 1181690 logs.go:282] 0 containers: []
	W1209 04:26:48.045900 1181690 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:26:48.045905 1181690 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:26:48.045969 1181690 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:26:48.070505 1181690 cri.go:89] found id: ""
	I1209 04:26:48.070519 1181690 logs.go:282] 0 containers: []
	W1209 04:26:48.070526 1181690 logs.go:284] No container was found matching "etcd"
	I1209 04:26:48.070531 1181690 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:26:48.070591 1181690 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:26:48.096031 1181690 cri.go:89] found id: ""
	I1209 04:26:48.096061 1181690 logs.go:282] 0 containers: []
	W1209 04:26:48.096068 1181690 logs.go:284] No container was found matching "coredns"
	I1209 04:26:48.096074 1181690 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:26:48.096162 1181690 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:26:48.119513 1181690 cri.go:89] found id: ""
	I1209 04:26:48.119527 1181690 logs.go:282] 0 containers: []
	W1209 04:26:48.119534 1181690 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:26:48.119539 1181690 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:26:48.119599 1181690 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:26:48.144231 1181690 cri.go:89] found id: ""
	I1209 04:26:48.144245 1181690 logs.go:282] 0 containers: []
	W1209 04:26:48.144252 1181690 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:26:48.144257 1181690 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:26:48.144317 1181690 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:26:48.168267 1181690 cri.go:89] found id: ""
	I1209 04:26:48.168281 1181690 logs.go:282] 0 containers: []
	W1209 04:26:48.168287 1181690 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:26:48.168293 1181690 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:26:48.168353 1181690 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:26:48.196948 1181690 cri.go:89] found id: ""
	I1209 04:26:48.196961 1181690 logs.go:282] 0 containers: []
	W1209 04:26:48.196967 1181690 logs.go:284] No container was found matching "kindnet"
	I1209 04:26:48.196976 1181690 logs.go:123] Gathering logs for container status ...
	I1209 04:26:48.196986 1181690 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:26:48.226160 1181690 logs.go:123] Gathering logs for kubelet ...
	I1209 04:26:48.226176 1181690 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:26:48.282554 1181690 logs.go:123] Gathering logs for dmesg ...
	I1209 04:26:48.282573 1181690 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:26:48.299510 1181690 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:26:48.299527 1181690 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:26:48.367088 1181690 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:26:48.358889    4752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:26:48.359604    4752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:26:48.361171    4752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:26:48.361715    4752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:26:48.363317    4752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:26:48.358889    4752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:26:48.359604    4752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:26:48.361171    4752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:26:48.361715    4752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:26:48.363317    4752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:26:48.367098 1181690 logs.go:123] Gathering logs for containerd ...
	I1209 04:26:48.367109 1181690 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	W1209 04:26:48.405107 1181690 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00013653s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1209 04:26:48.405148 1181690 out.go:285] * 
	W1209 04:26:48.405832 1181690 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00013653s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1209 04:26:48.405980 1181690 out.go:285] * 
	W1209 04:26:48.408735 1181690 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 04:26:48.416479 1181690 out.go:203] 
	W1209 04:26:48.420159 1181690 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00013653s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1209 04:26:48.420196 1181690 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1209 04:26:48.420216 1181690 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1209 04:26:48.423364 1181690 out.go:203] 
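	Note on the suggestion above: --extra-config=kubelet.cgroup-driver=systemd addresses a cgroup-driver mismatch, but the kubelet journal at the end of this report shows the failure in this run is kubelet v1.35's cgroup v1 validation. To confirm which cgroup version the host mounts (standard mount point assumed):
	
	    stat -fc %T /sys/fs/cgroup/   # cgroup2fs on a v2 host, tmpfs on a v1 hierarchy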
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 09 04:18:40 functional-667319 containerd[759]: time="2025-12-09T04:18:39.999474819Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 09 04:18:40 functional-667319 containerd[759]: time="2025-12-09T04:18:39.999490547Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 09 04:18:40 functional-667319 containerd[759]: time="2025-12-09T04:18:39.999529192Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 09 04:18:40 functional-667319 containerd[759]: time="2025-12-09T04:18:39.999546644Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 09 04:18:40 functional-667319 containerd[759]: time="2025-12-09T04:18:39.999557294Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 09 04:18:40 functional-667319 containerd[759]: time="2025-12-09T04:18:39.999568174Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 09 04:18:40 functional-667319 containerd[759]: time="2025-12-09T04:18:39.999576813Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 09 04:18:40 functional-667319 containerd[759]: time="2025-12-09T04:18:39.999588218Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 09 04:18:40 functional-667319 containerd[759]: time="2025-12-09T04:18:39.999606868Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 09 04:18:40 functional-667319 containerd[759]: time="2025-12-09T04:18:39.999644224Z" level=info msg="Connect containerd service"
	Dec 09 04:18:40 functional-667319 containerd[759]: time="2025-12-09T04:18:39.999997616Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 09 04:18:40 functional-667319 containerd[759]: time="2025-12-09T04:18:40.000776464Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 09 04:18:40 functional-667319 containerd[759]: time="2025-12-09T04:18:40.028738895Z" level=info msg="Start subscribing containerd event"
	Dec 09 04:18:40 functional-667319 containerd[759]: time="2025-12-09T04:18:40.028845468Z" level=info msg="Start recovering state"
	Dec 09 04:18:40 functional-667319 containerd[759]: time="2025-12-09T04:18:40.029071282Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 09 04:18:40 functional-667319 containerd[759]: time="2025-12-09T04:18:40.029197145Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 09 04:18:40 functional-667319 containerd[759]: time="2025-12-09T04:18:40.077421160Z" level=info msg="Start event monitor"
	Dec 09 04:18:40 functional-667319 containerd[759]: time="2025-12-09T04:18:40.077708560Z" level=info msg="Start cni network conf syncer for default"
	Dec 09 04:18:40 functional-667319 containerd[759]: time="2025-12-09T04:18:40.077817956Z" level=info msg="Start streaming server"
	Dec 09 04:18:40 functional-667319 containerd[759]: time="2025-12-09T04:18:40.077918597Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 09 04:18:40 functional-667319 containerd[759]: time="2025-12-09T04:18:40.078152526Z" level=info msg="runtime interface starting up..."
	Dec 09 04:18:40 functional-667319 containerd[759]: time="2025-12-09T04:18:40.078256031Z" level=info msg="starting plugins..."
	Dec 09 04:18:40 functional-667319 containerd[759]: time="2025-12-09T04:18:40.078333099Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 09 04:18:40 functional-667319 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 09 04:18:40 functional-667319 containerd[759]: time="2025-12-09T04:18:40.081024715Z" level=info msg="containerd successfully booted in 0.104352s"
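	The "failed to load cni during init" error above is expected at this stage: /etc/cni/net.d is empty until a CNI config is installed, and the cni conf syncer started a few lines later picks one up once written. To check on the node:
	
	    ls /etc/cni/net.d   # empty here, so the cri plugin cannot initialize pod networking yet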
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:26:49.398510    4857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:26:49.398937    4857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:26:49.400525    4857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:26:49.401003    4857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:26:49.402582    4857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 9 03:13] overlayfs: idmapped layers are currently not supported
	[ +25.904254] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:14] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:16] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:18] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:19] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:30] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:32] overlayfs: idmapped layers are currently not supported
	[ +28.114653] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:33] overlayfs: idmapped layers are currently not supported
	[ +23.720849] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:34] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:35] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:36] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:37] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:38] overlayfs: idmapped layers are currently not supported
	[ +23.656275] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:39] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:57] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:58] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:00] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:02] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:03] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:05] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:06] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 04:26:49 up  7:08,  0 user,  load average: 0.08, 0.49, 1.11
	Linux functional-667319 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 09 04:26:46 functional-667319 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:26:46 functional-667319 kubelet[4664]: E1209 04:26:46.480101    4664 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 09 04:26:46 functional-667319 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 09 04:26:46 functional-667319 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 09 04:26:47 functional-667319 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
	Dec 09 04:26:47 functional-667319 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:26:47 functional-667319 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:26:47 functional-667319 kubelet[4670]: E1209 04:26:47.225776    4670 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 09 04:26:47 functional-667319 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 09 04:26:47 functional-667319 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 09 04:26:47 functional-667319 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 09 04:26:47 functional-667319 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:26:47 functional-667319 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:26:47 functional-667319 kubelet[4675]: E1209 04:26:47.981193    4675 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 09 04:26:47 functional-667319 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 09 04:26:47 functional-667319 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 09 04:26:48 functional-667319 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 09 04:26:48 functional-667319 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:26:48 functional-667319 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:26:48 functional-667319 kubelet[4769]: E1209 04:26:48.755143    4769 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 09 04:26:48 functional-667319 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 09 04:26:48 functional-667319 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 09 04:26:49 functional-667319 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Dec 09 04:26:49 functional-667319 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:26:49 functional-667319 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
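The kubelet journal above shows the root cause of this start failure: kubelet v1.35.0-beta.0 refuses to start because the host is still on cgroup v1 ("kubelet is configured to not run on a host using cgroup v1"), so it crash-loops (restart counters 319-322) and the apiserver on port 8441 never comes up, which explains the "connection refused" errors in the describe-nodes section. A minimal sketch for confirming the hierarchy on the CI host (a standard coreutils check, not part of the test run):

	# Prints "cgroup2fs" on a cgroup v2 host; "tmpfs" indicates legacy cgroup v1,
	# matching the Ubuntu 20.04 / 5.15 kernel reported in the sections above.
	stat -fc %T /sys/fs/cgroup

Booting the worker with a unified hierarchy (for example, the systemd.unified_cgroup_hierarchy=1 kernel parameter) would be one way to satisfy this kubelet check; that is an assumption about the fix, not something this log verifies.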
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-667319 -n functional-667319
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-667319 -n functional-667319: exit status 6 (322.024604ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1209 04:26:49.843522 1187351 status.go:458] kubeconfig endpoint: get endpoint: "functional-667319" does not appear in /home/jenkins/minikube-integration/22081-1142328/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "functional-667319" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (500.70s)
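The final status check reflects two distinct problems: the apiserver is stopped (the kubelet crash loop above) and the kubeconfig no longer carries an entry for this profile, hence the stale-context warning. The warning itself names the remedy; a hedged sketch of running it by hand, using the binary path from this job (this repairs only the kubectl context, not the underlying kubelet failure):

	# Re-point the kubeconfig entry for this profile at the current endpoint.
	out/minikube-linux-arm64 update-context -p functional-667319
	# Then re-check the apiserver state.
	out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-667319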

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (367.82s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart
I1209 04:26:49.859884 1144231 config.go:182] Loaded profile config "functional-667319": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-667319 --alsologtostderr -v=8
E1209 04:27:38.985507 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-717497/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 04:28:06.695651 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-717497/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 04:31:06.731663 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/addons-221952/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 04:32:29.812753 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/addons-221952/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 04:32:38.985378 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-717497/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
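The cert_rotation errors above appear to come from client-go's client-certificate reload watcher: the kubeconfig still references client.crt files for older profiles (functional-717497, addons-221952) that have since been deleted, so they are noise relative to this test. A quick sketch (paths assumed from the errors themselves) to confirm which profile certs actually remain on the CI host:

	# List surviving minikube profile client certificates.
	ls -l /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/*/client.crt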
functional_test.go:674: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-667319 --alsologtostderr -v=8: exit status 80 (6m5.098450528s)

                                                
                                                
-- stdout --
	* [functional-667319] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22081
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22081-1142328/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-1142328/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "functional-667319" primary control-plane node in "functional-667319" cluster
	* Pulling base image v0.0.48-1765184860-22066 ...
	* Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1209 04:26:49.901158 1187425 out.go:360] Setting OutFile to fd 1 ...
	I1209 04:26:49.901350 1187425 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 04:26:49.901380 1187425 out.go:374] Setting ErrFile to fd 2...
	I1209 04:26:49.901407 1187425 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 04:26:49.902126 1187425 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-1142328/.minikube/bin
	I1209 04:26:49.902570 1187425 out.go:368] Setting JSON to false
	I1209 04:26:49.903455 1187425 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":25733,"bootTime":1765228677,"procs":156,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1209 04:26:49.903532 1187425 start.go:143] virtualization:  
	I1209 04:26:49.907035 1187425 out.go:179] * [functional-667319] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1209 04:26:49.910766 1187425 out.go:179]   - MINIKUBE_LOCATION=22081
	I1209 04:26:49.910878 1187425 notify.go:221] Checking for updates...
	I1209 04:26:49.916570 1187425 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 04:26:49.919423 1187425 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22081-1142328/kubeconfig
	I1209 04:26:49.922184 1187425 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-1142328/.minikube
	I1209 04:26:49.924947 1187425 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1209 04:26:49.927723 1187425 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 04:26:49.930999 1187425 config.go:182] Loaded profile config "functional-667319": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1209 04:26:49.931139 1187425 driver.go:422] Setting default libvirt URI to qemu:///system
	I1209 04:26:49.958230 1187425 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1209 04:26:49.958344 1187425 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 04:26:50.018007 1187425 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-09 04:26:50.006695366 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1209 04:26:50.018130 1187425 docker.go:319] overlay module found
	I1209 04:26:50.021068 1187425 out.go:179] * Using the docker driver based on existing profile
	I1209 04:26:50.024068 1187425 start.go:309] selected driver: docker
	I1209 04:26:50.024096 1187425 start.go:927] validating driver "docker" against &{Name:functional-667319 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-667319 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 04:26:50.024203 1187425 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 04:26:50.024322 1187425 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 04:26:50.086853 1187425 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-09 04:26:50.07716198 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1209 04:26:50.087299 1187425 cni.go:84] Creating CNI manager for ""
	I1209 04:26:50.087371 1187425 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1209 04:26:50.087429 1187425 start.go:353] cluster config:
	{Name:functional-667319 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-667319 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 04:26:50.090570 1187425 out.go:179] * Starting "functional-667319" primary control-plane node in "functional-667319" cluster
	I1209 04:26:50.093453 1187425 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1209 04:26:50.098431 1187425 out.go:179] * Pulling base image v0.0.48-1765184860-22066 ...
	I1209 04:26:50.101405 1187425 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1209 04:26:50.101471 1187425 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local docker daemon
	I1209 04:26:50.101485 1187425 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1209 04:26:50.101503 1187425 cache.go:65] Caching tarball of preloaded images
	I1209 04:26:50.101600 1187425 preload.go:238] Found /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1209 04:26:50.101616 1187425 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1209 04:26:50.101720 1187425 profile.go:143] Saving config to /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/config.json ...
	I1209 04:26:50.125607 1187425 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local docker daemon, skipping pull
	I1209 04:26:50.125633 1187425 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c exists in daemon, skipping load
	I1209 04:26:50.125648 1187425 cache.go:243] Successfully downloaded all kic artifacts
	I1209 04:26:50.125680 1187425 start.go:360] acquireMachinesLock for functional-667319: {Name:mk6c31f0747796f5f8ac8ea1653d6ee60fe2a47d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 04:26:50.125839 1187425 start.go:364] duration metric: took 130.318µs to acquireMachinesLock for "functional-667319"
	I1209 04:26:50.125869 1187425 start.go:96] Skipping create...Using existing machine configuration
	I1209 04:26:50.125878 1187425 fix.go:54] fixHost starting: 
	I1209 04:26:50.126147 1187425 cli_runner.go:164] Run: docker container inspect functional-667319 --format={{.State.Status}}
	I1209 04:26:50.147043 1187425 fix.go:112] recreateIfNeeded on functional-667319: state=Running err=<nil>
	W1209 04:26:50.147073 1187425 fix.go:138] unexpected machine state, will restart: <nil>
	I1209 04:26:50.150254 1187425 out.go:252] * Updating the running docker "functional-667319" container ...
	I1209 04:26:50.150291 1187425 machine.go:94] provisionDockerMachine start ...
	I1209 04:26:50.150379 1187425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-667319
	I1209 04:26:50.167513 1187425 main.go:143] libmachine: Using SSH client type: native
	I1209 04:26:50.167851 1187425 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 33900 <nil> <nil>}
	I1209 04:26:50.167868 1187425 main.go:143] libmachine: About to run SSH command:
	hostname
	I1209 04:26:50.327552 1187425 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-667319
	
	I1209 04:26:50.327578 1187425 ubuntu.go:182] provisioning hostname "functional-667319"
	I1209 04:26:50.327642 1187425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-667319
	I1209 04:26:50.345440 1187425 main.go:143] libmachine: Using SSH client type: native
	I1209 04:26:50.345757 1187425 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 33900 <nil> <nil>}
	I1209 04:26:50.345775 1187425 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-667319 && echo "functional-667319" | sudo tee /etc/hostname
	I1209 04:26:50.504917 1187425 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-667319
	
	I1209 04:26:50.505070 1187425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-667319
	I1209 04:26:50.522734 1187425 main.go:143] libmachine: Using SSH client type: native
	I1209 04:26:50.523054 1187425 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 33900 <nil> <nil>}
	I1209 04:26:50.523070 1187425 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-667319' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-667319/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-667319' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 04:26:50.676107 1187425 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1209 04:26:50.676133 1187425 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22081-1142328/.minikube CaCertPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22081-1142328/.minikube}
	I1209 04:26:50.676165 1187425 ubuntu.go:190] setting up certificates
	I1209 04:26:50.676182 1187425 provision.go:84] configureAuth start
	I1209 04:26:50.676245 1187425 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-667319
	I1209 04:26:50.692809 1187425 provision.go:143] copyHostCerts
	I1209 04:26:50.692850 1187425 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.pem
	I1209 04:26:50.692881 1187425 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.pem, removing ...
	I1209 04:26:50.692892 1187425 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.pem
	I1209 04:26:50.692964 1187425 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.pem (1078 bytes)
	I1209 04:26:50.693060 1187425 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22081-1142328/.minikube/cert.pem
	I1209 04:26:50.693088 1187425 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-1142328/.minikube/cert.pem, removing ...
	I1209 04:26:50.693096 1187425 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-1142328/.minikube/cert.pem
	I1209 04:26:50.693122 1187425 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22081-1142328/.minikube/cert.pem (1123 bytes)
	I1209 04:26:50.693175 1187425 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22081-1142328/.minikube/key.pem
	I1209 04:26:50.693199 1187425 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-1142328/.minikube/key.pem, removing ...
	I1209 04:26:50.693206 1187425 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-1142328/.minikube/key.pem
	I1209 04:26:50.693233 1187425 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22081-1142328/.minikube/key.pem (1675 bytes)
	I1209 04:26:50.693287 1187425 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca-key.pem org=jenkins.functional-667319 san=[127.0.0.1 192.168.49.2 functional-667319 localhost minikube]
	I1209 04:26:50.808459 1187425 provision.go:177] copyRemoteCerts
	I1209 04:26:50.808521 1187425 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 04:26:50.808568 1187425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-667319
	I1209 04:26:50.825015 1187425 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/functional-667319/id_rsa Username:docker}
	I1209 04:26:50.931904 1187425 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1209 04:26:50.931970 1187425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1209 04:26:50.950373 1187425 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1209 04:26:50.950430 1187425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1209 04:26:50.967052 1187425 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1209 04:26:50.967110 1187425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1209 04:26:50.984302 1187425 provision.go:87] duration metric: took 308.098174ms to configureAuth
	I1209 04:26:50.984386 1187425 ubuntu.go:206] setting minikube options for container-runtime
	I1209 04:26:50.984596 1187425 config.go:182] Loaded profile config "functional-667319": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1209 04:26:50.984634 1187425 machine.go:97] duration metric: took 834.335015ms to provisionDockerMachine
	I1209 04:26:50.984656 1187425 start.go:293] postStartSetup for "functional-667319" (driver="docker")
	I1209 04:26:50.984680 1187425 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 04:26:50.984759 1187425 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 04:26:50.984834 1187425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-667319
	I1209 04:26:51.005808 1187425 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/functional-667319/id_rsa Username:docker}
	I1209 04:26:51.112821 1187425 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 04:26:51.116496 1187425 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1209 04:26:51.116518 1187425 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1209 04:26:51.116523 1187425 command_runner.go:130] > VERSION_ID="12"
	I1209 04:26:51.116528 1187425 command_runner.go:130] > VERSION="12 (bookworm)"
	I1209 04:26:51.116532 1187425 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1209 04:26:51.116536 1187425 command_runner.go:130] > ID=debian
	I1209 04:26:51.116540 1187425 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1209 04:26:51.116545 1187425 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1209 04:26:51.116554 1187425 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1209 04:26:51.116627 1187425 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1209 04:26:51.116648 1187425 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1209 04:26:51.116659 1187425 filesync.go:126] Scanning /home/jenkins/minikube-integration/22081-1142328/.minikube/addons for local assets ...
	I1209 04:26:51.116715 1187425 filesync.go:126] Scanning /home/jenkins/minikube-integration/22081-1142328/.minikube/files for local assets ...
	I1209 04:26:51.116799 1187425 filesync.go:149] local asset: /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/ssl/certs/11442312.pem -> 11442312.pem in /etc/ssl/certs
	I1209 04:26:51.116806 1187425 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/ssl/certs/11442312.pem -> /etc/ssl/certs/11442312.pem
	I1209 04:26:51.116882 1187425 filesync.go:149] local asset: /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/test/nested/copy/1144231/hosts -> hosts in /etc/test/nested/copy/1144231
	I1209 04:26:51.116886 1187425 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/test/nested/copy/1144231/hosts -> /etc/test/nested/copy/1144231/hosts
	I1209 04:26:51.116933 1187425 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1144231
	I1209 04:26:51.124908 1187425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/ssl/certs/11442312.pem --> /etc/ssl/certs/11442312.pem (1708 bytes)
	I1209 04:26:51.143368 1187425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/test/nested/copy/1144231/hosts --> /etc/test/nested/copy/1144231/hosts (40 bytes)
	I1209 04:26:51.161824 1187425 start.go:296] duration metric: took 177.139225ms for postStartSetup
	I1209 04:26:51.161916 1187425 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1209 04:26:51.161982 1187425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-667319
	I1209 04:26:51.181271 1187425 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/functional-667319/id_rsa Username:docker}
	I1209 04:26:51.284406 1187425 command_runner.go:130] > 12%
	I1209 04:26:51.284922 1187425 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1209 04:26:51.288619 1187425 command_runner.go:130] > 172G
	I1209 04:26:51.288953 1187425 fix.go:56] duration metric: took 1.163071262s for fixHost
	I1209 04:26:51.288968 1187425 start.go:83] releasing machines lock for "functional-667319", held for 1.163111146s
	I1209 04:26:51.289042 1187425 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-667319
	I1209 04:26:51.305835 1187425 ssh_runner.go:195] Run: cat /version.json
	I1209 04:26:51.305885 1187425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-667319
	I1209 04:26:51.305897 1187425 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 04:26:51.305950 1187425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-667319
	I1209 04:26:51.325384 1187425 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/functional-667319/id_rsa Username:docker}
	I1209 04:26:51.327293 1187425 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/functional-667319/id_rsa Username:docker}
	I1209 04:26:51.427270 1187425 command_runner.go:130] > {"iso_version": "v1.37.0-1764843329-22032", "kicbase_version": "v0.0.48-1765184860-22066", "minikube_version": "v1.37.0", "commit": "27bcd52be11288bda2f9abde063aa47b22607695"}
	I1209 04:26:51.427541 1187425 ssh_runner.go:195] Run: systemctl --version
	I1209 04:26:51.517549 1187425 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1209 04:26:51.520210 1187425 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1209 04:26:51.520243 1187425 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1209 04:26:51.520320 1187425 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1209 04:26:51.524536 1187425 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1209 04:26:51.524574 1187425 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 04:26:51.524644 1187425 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 04:26:51.532138 1187425 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1209 04:26:51.532170 1187425 start.go:496] detecting cgroup driver to use...
	I1209 04:26:51.532202 1187425 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1209 04:26:51.532264 1187425 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1209 04:26:51.547055 1187425 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1209 04:26:51.559544 1187425 docker.go:218] disabling cri-docker service (if available) ...
	I1209 04:26:51.559644 1187425 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 04:26:51.574821 1187425 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 04:26:51.587447 1187425 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 04:26:51.703845 1187425 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 04:26:51.839863 1187425 docker.go:234] disabling docker service ...
	I1209 04:26:51.839930 1187425 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 04:26:51.856255 1187425 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 04:26:51.869081 1187425 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 04:26:51.995560 1187425 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 04:26:52.125293 1187425 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 04:26:52.137749 1187425 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 04:26:52.150135 1187425 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1209 04:26:52.151507 1187425 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1209 04:26:52.160197 1187425 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1209 04:26:52.168921 1187425 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1209 04:26:52.169008 1187425 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1209 04:26:52.177592 1187425 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1209 04:26:52.185997 1187425 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1209 04:26:52.194259 1187425 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1209 04:26:52.202620 1187425 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 04:26:52.210466 1187425 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1209 04:26:52.219232 1187425 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1209 04:26:52.227579 1187425 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1209 04:26:52.236059 1187425 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 04:26:52.242619 1187425 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1209 04:26:52.243485 1187425 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 04:26:52.250890 1187425 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 04:26:52.361246 1187425 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1209 04:26:52.490552 1187425 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1209 04:26:52.490653 1187425 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1209 04:26:52.497112 1187425 command_runner.go:130] >   File: /run/containerd/containerd.sock
	I1209 04:26:52.497174 1187425 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1209 04:26:52.497206 1187425 command_runner.go:130] > Device: 0,72	Inode: 1613        Links: 1
	I1209 04:26:52.497227 1187425 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1209 04:26:52.497247 1187425 command_runner.go:130] > Access: 2025-12-09 04:26:52.442263978 +0000
	I1209 04:26:52.497281 1187425 command_runner.go:130] > Modify: 2025-12-09 04:26:52.442263978 +0000
	I1209 04:26:52.497301 1187425 command_runner.go:130] > Change: 2025-12-09 04:26:52.442263978 +0000
	I1209 04:26:52.497319 1187425 command_runner.go:130] >  Birth: -
	I1209 04:26:52.497534 1187425 start.go:564] Will wait 60s for crictl version
	I1209 04:26:52.497619 1187425 ssh_runner.go:195] Run: which crictl
	I1209 04:26:52.501257 1187425 command_runner.go:130] > /usr/local/bin/crictl
	I1209 04:26:52.502001 1187425 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1209 04:26:52.535942 1187425 command_runner.go:130] > Version:  0.1.0
	I1209 04:26:52.535964 1187425 command_runner.go:130] > RuntimeName:  containerd
	I1209 04:26:52.535970 1187425 command_runner.go:130] > RuntimeVersion:  v2.2.0
	I1209 04:26:52.535975 1187425 command_runner.go:130] > RuntimeApiVersion:  v1
	I1209 04:26:52.535985 1187425 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1209 04:26:52.536096 1187425 ssh_runner.go:195] Run: containerd --version
	I1209 04:26:52.556939 1187425 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
	I1209 04:26:52.562389 1187425 ssh_runner.go:195] Run: containerd --version
	I1209 04:26:52.582187 1187425 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
	I1209 04:26:52.587659 1187425 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1209 04:26:52.590705 1187425 cli_runner.go:164] Run: docker network inspect functional-667319 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1209 04:26:52.606900 1187425 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1209 04:26:52.610849 1187425 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1209 04:26:52.610974 1187425 kubeadm.go:884] updating cluster {Name:functional-667319 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-667319 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 04:26:52.611074 1187425 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1209 04:26:52.611135 1187425 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 04:26:52.634142 1187425 command_runner.go:130] > {
	I1209 04:26:52.634161 1187425 command_runner.go:130] >   "images":  [
	I1209 04:26:52.634166 1187425 command_runner.go:130] >     {
	I1209 04:26:52.634175 1187425 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1209 04:26:52.634180 1187425 command_runner.go:130] >       "repoTags":  [
	I1209 04:26:52.634186 1187425 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1209 04:26:52.634190 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.634194 1187425 command_runner.go:130] >       "repoDigests":  [
	I1209 04:26:52.634210 1187425 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1209 04:26:52.634213 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.634218 1187425 command_runner.go:130] >       "size":  "40636774",
	I1209 04:26:52.634222 1187425 command_runner.go:130] >       "username":  "",
	I1209 04:26:52.634230 1187425 command_runner.go:130] >       "pinned":  false
	I1209 04:26:52.634233 1187425 command_runner.go:130] >     },
	I1209 04:26:52.634236 1187425 command_runner.go:130] >     {
	I1209 04:26:52.634246 1187425 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1209 04:26:52.634251 1187425 command_runner.go:130] >       "repoTags":  [
	I1209 04:26:52.634256 1187425 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1209 04:26:52.634259 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.634263 1187425 command_runner.go:130] >       "repoDigests":  [
	I1209 04:26:52.634271 1187425 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1209 04:26:52.634274 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.634278 1187425 command_runner.go:130] >       "size":  "8034419",
	I1209 04:26:52.634282 1187425 command_runner.go:130] >       "username":  "",
	I1209 04:26:52.634286 1187425 command_runner.go:130] >       "pinned":  false
	I1209 04:26:52.634289 1187425 command_runner.go:130] >     },
	I1209 04:26:52.634292 1187425 command_runner.go:130] >     {
	I1209 04:26:52.634298 1187425 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1209 04:26:52.634302 1187425 command_runner.go:130] >       "repoTags":  [
	I1209 04:26:52.634307 1187425 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1209 04:26:52.634310 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.634317 1187425 command_runner.go:130] >       "repoDigests":  [
	I1209 04:26:52.634325 1187425 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1209 04:26:52.634328 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.634333 1187425 command_runner.go:130] >       "size":  "21168808",
	I1209 04:26:52.634337 1187425 command_runner.go:130] >       "username":  "nonroot",
	I1209 04:26:52.634341 1187425 command_runner.go:130] >       "pinned":  false
	I1209 04:26:52.634349 1187425 command_runner.go:130] >     },
	I1209 04:26:52.634355 1187425 command_runner.go:130] >     {
	I1209 04:26:52.634362 1187425 command_runner.go:130] >       "id":  "sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1209 04:26:52.634367 1187425 command_runner.go:130] >       "repoTags":  [
	I1209 04:26:52.634372 1187425 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1209 04:26:52.634375 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.634379 1187425 command_runner.go:130] >       "repoDigests":  [
	I1209 04:26:52.634387 1187425 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"
	I1209 04:26:52.634393 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.634397 1187425 command_runner.go:130] >       "size":  "21136588",
	I1209 04:26:52.634402 1187425 command_runner.go:130] >       "uid":  {
	I1209 04:26:52.634405 1187425 command_runner.go:130] >         "value":  "0"
	I1209 04:26:52.634408 1187425 command_runner.go:130] >       },
	I1209 04:26:52.634412 1187425 command_runner.go:130] >       "username":  "",
	I1209 04:26:52.634415 1187425 command_runner.go:130] >       "pinned":  false
	I1209 04:26:52.634418 1187425 command_runner.go:130] >     },
	I1209 04:26:52.634421 1187425 command_runner.go:130] >     {
	I1209 04:26:52.634428 1187425 command_runner.go:130] >       "id":  "sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1209 04:26:52.634431 1187425 command_runner.go:130] >       "repoTags":  [
	I1209 04:26:52.634437 1187425 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1209 04:26:52.634440 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.634443 1187425 command_runner.go:130] >       "repoDigests":  [
	I1209 04:26:52.634451 1187425 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58"
	I1209 04:26:52.634453 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.634457 1187425 command_runner.go:130] >       "size":  "24678359",
	I1209 04:26:52.634461 1187425 command_runner.go:130] >       "uid":  {
	I1209 04:26:52.634468 1187425 command_runner.go:130] >         "value":  "0"
	I1209 04:26:52.634471 1187425 command_runner.go:130] >       },
	I1209 04:26:52.634474 1187425 command_runner.go:130] >       "username":  "",
	I1209 04:26:52.634478 1187425 command_runner.go:130] >       "pinned":  false
	I1209 04:26:52.634480 1187425 command_runner.go:130] >     },
	I1209 04:26:52.634483 1187425 command_runner.go:130] >     {
	I1209 04:26:52.634490 1187425 command_runner.go:130] >       "id":  "sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1209 04:26:52.634493 1187425 command_runner.go:130] >       "repoTags":  [
	I1209 04:26:52.634499 1187425 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1209 04:26:52.634501 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.634505 1187425 command_runner.go:130] >       "repoDigests":  [
	I1209 04:26:52.634513 1187425 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d"
	I1209 04:26:52.634516 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.634520 1187425 command_runner.go:130] >       "size":  "20661043",
	I1209 04:26:52.634523 1187425 command_runner.go:130] >       "uid":  {
	I1209 04:26:52.634532 1187425 command_runner.go:130] >         "value":  "0"
	I1209 04:26:52.634535 1187425 command_runner.go:130] >       },
	I1209 04:26:52.634539 1187425 command_runner.go:130] >       "username":  "",
	I1209 04:26:52.634543 1187425 command_runner.go:130] >       "pinned":  false
	I1209 04:26:52.634546 1187425 command_runner.go:130] >     },
	I1209 04:26:52.634548 1187425 command_runner.go:130] >     {
	I1209 04:26:52.634555 1187425 command_runner.go:130] >       "id":  "sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1209 04:26:52.634558 1187425 command_runner.go:130] >       "repoTags":  [
	I1209 04:26:52.634563 1187425 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1209 04:26:52.634566 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.634569 1187425 command_runner.go:130] >       "repoDigests":  [
	I1209 04:26:52.634577 1187425 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1209 04:26:52.634580 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.634583 1187425 command_runner.go:130] >       "size":  "22429671",
	I1209 04:26:52.634587 1187425 command_runner.go:130] >       "username":  "",
	I1209 04:26:52.634591 1187425 command_runner.go:130] >       "pinned":  false
	I1209 04:26:52.634594 1187425 command_runner.go:130] >     },
	I1209 04:26:52.634597 1187425 command_runner.go:130] >     {
	I1209 04:26:52.634604 1187425 command_runner.go:130] >       "id":  "sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1209 04:26:52.634607 1187425 command_runner.go:130] >       "repoTags":  [
	I1209 04:26:52.634613 1187425 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1209 04:26:52.634616 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.634620 1187425 command_runner.go:130] >       "repoDigests":  [
	I1209 04:26:52.634627 1187425 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6"
	I1209 04:26:52.634630 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.634634 1187425 command_runner.go:130] >       "size":  "15391364",
	I1209 04:26:52.634638 1187425 command_runner.go:130] >       "uid":  {
	I1209 04:26:52.634641 1187425 command_runner.go:130] >         "value":  "0"
	I1209 04:26:52.634644 1187425 command_runner.go:130] >       },
	I1209 04:26:52.634649 1187425 command_runner.go:130] >       "username":  "",
	I1209 04:26:52.634653 1187425 command_runner.go:130] >       "pinned":  false
	I1209 04:26:52.634655 1187425 command_runner.go:130] >     },
	I1209 04:26:52.634659 1187425 command_runner.go:130] >     {
	I1209 04:26:52.634670 1187425 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1209 04:26:52.634674 1187425 command_runner.go:130] >       "repoTags":  [
	I1209 04:26:52.634678 1187425 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1209 04:26:52.634681 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.634685 1187425 command_runner.go:130] >       "repoDigests":  [
	I1209 04:26:52.634693 1187425 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1209 04:26:52.634695 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.634699 1187425 command_runner.go:130] >       "size":  "267939",
	I1209 04:26:52.634703 1187425 command_runner.go:130] >       "uid":  {
	I1209 04:26:52.634706 1187425 command_runner.go:130] >         "value":  "65535"
	I1209 04:26:52.634709 1187425 command_runner.go:130] >       },
	I1209 04:26:52.634713 1187425 command_runner.go:130] >       "username":  "",
	I1209 04:26:52.634717 1187425 command_runner.go:130] >       "pinned":  true
	I1209 04:26:52.634720 1187425 command_runner.go:130] >     }
	I1209 04:26:52.634723 1187425 command_runner.go:130] >   ]
	I1209 04:26:52.634726 1187425 command_runner.go:130] > }
	I1209 04:26:52.636238 1187425 containerd.go:627] all images are preloaded for containerd runtime.
	I1209 04:26:52.636265 1187425 containerd.go:534] Images already preloaded, skipping extraction
	I1209 04:26:52.636328 1187425 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 04:26:52.662300 1187425 command_runner.go:130] > {
	I1209 04:26:52.662318 1187425 command_runner.go:130] >   "images":  [
	I1209 04:26:52.662323 1187425 command_runner.go:130] >     {
	I1209 04:26:52.662332 1187425 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1209 04:26:52.662349 1187425 command_runner.go:130] >       "repoTags":  [
	I1209 04:26:52.662355 1187425 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1209 04:26:52.662358 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.662363 1187425 command_runner.go:130] >       "repoDigests":  [
	I1209 04:26:52.662375 1187425 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1209 04:26:52.662379 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.662383 1187425 command_runner.go:130] >       "size":  "40636774",
	I1209 04:26:52.662388 1187425 command_runner.go:130] >       "username":  "",
	I1209 04:26:52.662392 1187425 command_runner.go:130] >       "pinned":  false
	I1209 04:26:52.662395 1187425 command_runner.go:130] >     },
	I1209 04:26:52.662398 1187425 command_runner.go:130] >     {
	I1209 04:26:52.662406 1187425 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1209 04:26:52.662410 1187425 command_runner.go:130] >       "repoTags":  [
	I1209 04:26:52.662416 1187425 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1209 04:26:52.662420 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.662424 1187425 command_runner.go:130] >       "repoDigests":  [
	I1209 04:26:52.662436 1187425 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1209 04:26:52.662440 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.662444 1187425 command_runner.go:130] >       "size":  "8034419",
	I1209 04:26:52.662448 1187425 command_runner.go:130] >       "username":  "",
	I1209 04:26:52.662452 1187425 command_runner.go:130] >       "pinned":  false
	I1209 04:26:52.662460 1187425 command_runner.go:130] >     },
	I1209 04:26:52.662463 1187425 command_runner.go:130] >     {
	I1209 04:26:52.662470 1187425 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1209 04:26:52.662474 1187425 command_runner.go:130] >       "repoTags":  [
	I1209 04:26:52.662479 1187425 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1209 04:26:52.662482 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.662488 1187425 command_runner.go:130] >       "repoDigests":  [
	I1209 04:26:52.662496 1187425 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1209 04:26:52.662500 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.662504 1187425 command_runner.go:130] >       "size":  "21168808",
	I1209 04:26:52.662508 1187425 command_runner.go:130] >       "username":  "nonroot",
	I1209 04:26:52.662512 1187425 command_runner.go:130] >       "pinned":  false
	I1209 04:26:52.662515 1187425 command_runner.go:130] >     },
	I1209 04:26:52.662519 1187425 command_runner.go:130] >     {
	I1209 04:26:52.662525 1187425 command_runner.go:130] >       "id":  "sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1209 04:26:52.662529 1187425 command_runner.go:130] >       "repoTags":  [
	I1209 04:26:52.662534 1187425 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1209 04:26:52.662538 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.662541 1187425 command_runner.go:130] >       "repoDigests":  [
	I1209 04:26:52.662549 1187425 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"
	I1209 04:26:52.662552 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.662556 1187425 command_runner.go:130] >       "size":  "21136588",
	I1209 04:26:52.662561 1187425 command_runner.go:130] >       "uid":  {
	I1209 04:26:52.662565 1187425 command_runner.go:130] >         "value":  "0"
	I1209 04:26:52.662568 1187425 command_runner.go:130] >       },
	I1209 04:26:52.662572 1187425 command_runner.go:130] >       "username":  "",
	I1209 04:26:52.662576 1187425 command_runner.go:130] >       "pinned":  false
	I1209 04:26:52.662579 1187425 command_runner.go:130] >     },
	I1209 04:26:52.662585 1187425 command_runner.go:130] >     {
	I1209 04:26:52.662592 1187425 command_runner.go:130] >       "id":  "sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1209 04:26:52.662596 1187425 command_runner.go:130] >       "repoTags":  [
	I1209 04:26:52.662601 1187425 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1209 04:26:52.662605 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.662609 1187425 command_runner.go:130] >       "repoDigests":  [
	I1209 04:26:52.662617 1187425 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58"
	I1209 04:26:52.662619 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.662624 1187425 command_runner.go:130] >       "size":  "24678359",
	I1209 04:26:52.662627 1187425 command_runner.go:130] >       "uid":  {
	I1209 04:26:52.662639 1187425 command_runner.go:130] >         "value":  "0"
	I1209 04:26:52.662642 1187425 command_runner.go:130] >       },
	I1209 04:26:52.662646 1187425 command_runner.go:130] >       "username":  "",
	I1209 04:26:52.662650 1187425 command_runner.go:130] >       "pinned":  false
	I1209 04:26:52.662653 1187425 command_runner.go:130] >     },
	I1209 04:26:52.662656 1187425 command_runner.go:130] >     {
	I1209 04:26:52.662663 1187425 command_runner.go:130] >       "id":  "sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1209 04:26:52.662667 1187425 command_runner.go:130] >       "repoTags":  [
	I1209 04:26:52.662672 1187425 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1209 04:26:52.662675 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.662679 1187425 command_runner.go:130] >       "repoDigests":  [
	I1209 04:26:52.662687 1187425 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d"
	I1209 04:26:52.662690 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.662694 1187425 command_runner.go:130] >       "size":  "20661043",
	I1209 04:26:52.662697 1187425 command_runner.go:130] >       "uid":  {
	I1209 04:26:52.662701 1187425 command_runner.go:130] >         "value":  "0"
	I1209 04:26:52.662704 1187425 command_runner.go:130] >       },
	I1209 04:26:52.662707 1187425 command_runner.go:130] >       "username":  "",
	I1209 04:26:52.662712 1187425 command_runner.go:130] >       "pinned":  false
	I1209 04:26:52.662714 1187425 command_runner.go:130] >     },
	I1209 04:26:52.662717 1187425 command_runner.go:130] >     {
	I1209 04:26:52.662725 1187425 command_runner.go:130] >       "id":  "sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1209 04:26:52.662729 1187425 command_runner.go:130] >       "repoTags":  [
	I1209 04:26:52.662737 1187425 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1209 04:26:52.662741 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.662744 1187425 command_runner.go:130] >       "repoDigests":  [
	I1209 04:26:52.662752 1187425 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1209 04:26:52.662755 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.662759 1187425 command_runner.go:130] >       "size":  "22429671",
	I1209 04:26:52.662763 1187425 command_runner.go:130] >       "username":  "",
	I1209 04:26:52.662767 1187425 command_runner.go:130] >       "pinned":  false
	I1209 04:26:52.662770 1187425 command_runner.go:130] >     },
	I1209 04:26:52.662774 1187425 command_runner.go:130] >     {
	I1209 04:26:52.662781 1187425 command_runner.go:130] >       "id":  "sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1209 04:26:52.662785 1187425 command_runner.go:130] >       "repoTags":  [
	I1209 04:26:52.662791 1187425 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1209 04:26:52.662794 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.662798 1187425 command_runner.go:130] >       "repoDigests":  [
	I1209 04:26:52.662805 1187425 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6"
	I1209 04:26:52.662808 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.662813 1187425 command_runner.go:130] >       "size":  "15391364",
	I1209 04:26:52.662816 1187425 command_runner.go:130] >       "uid":  {
	I1209 04:26:52.662820 1187425 command_runner.go:130] >         "value":  "0"
	I1209 04:26:52.662823 1187425 command_runner.go:130] >       },
	I1209 04:26:52.662827 1187425 command_runner.go:130] >       "username":  "",
	I1209 04:26:52.662831 1187425 command_runner.go:130] >       "pinned":  false
	I1209 04:26:52.662834 1187425 command_runner.go:130] >     },
	I1209 04:26:52.662837 1187425 command_runner.go:130] >     {
	I1209 04:26:52.662843 1187425 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1209 04:26:52.662847 1187425 command_runner.go:130] >       "repoTags":  [
	I1209 04:26:52.662852 1187425 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1209 04:26:52.662855 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.662858 1187425 command_runner.go:130] >       "repoDigests":  [
	I1209 04:26:52.662866 1187425 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1209 04:26:52.662869 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.662873 1187425 command_runner.go:130] >       "size":  "267939",
	I1209 04:26:52.662881 1187425 command_runner.go:130] >       "uid":  {
	I1209 04:26:52.662886 1187425 command_runner.go:130] >         "value":  "65535"
	I1209 04:26:52.662890 1187425 command_runner.go:130] >       },
	I1209 04:26:52.662894 1187425 command_runner.go:130] >       "username":  "",
	I1209 04:26:52.662897 1187425 command_runner.go:130] >       "pinned":  true
	I1209 04:26:52.662900 1187425 command_runner.go:130] >     }
	I1209 04:26:52.662903 1187425 command_runner.go:130] >   ]
	I1209 04:26:52.662906 1187425 command_runner.go:130] > }
	I1209 04:26:52.665193 1187425 containerd.go:627] all images are preloaded for containerd runtime.
	I1209 04:26:52.665212 1187425 cache_images.go:86] Images are preloaded, skipping loading
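The "all images are preloaded" decision above is driven by the images array that "sudo crictl images --output json" returns. A minimal Go sketch of that check, assuming only the JSON shape visible in the log; the struct names and the expected-tag list are illustrative, not minikube's actual cache_images.go logic:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// image mirrors the fields of interest from `crictl images --output json`,
// matching the JSON printed in the log above.
type image struct {
	RepoTags []string `json:"repoTags"`
}

type imageList struct {
	Images []image `json:"images"`
}

// preloaded reports whether every expected repo tag is already present in
// the runtime's image store.
func preloaded(expected []string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		return false, err
	}
	have := map[string]bool{}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			have[tag] = true
		}
	}
	for _, want := range expected {
		if !have[want] {
			return false, nil
		}
	}
	return true, nil
}

func main() {
	ok, err := preloaded([]string{"registry.k8s.io/pause:3.10.1"})
	fmt.Println(ok, err)
}

When every tag resolves, extraction of the preload tarball can be skipped, which is exactly the "Images already preloaded, skipping extraction" path taken here.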
	I1209 04:26:52.665219 1187425 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
	I1209 04:26:52.665322 1187425 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-667319 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-667319 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
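The kubelet [Service] drop-in above is rendered from the node parameters in the config line that follows it (Kubernetes version, hostname override, node IP). A sketch of that rendering with text/template, writing to stdout instead of scp'ing the unit to the node; the struct and values are copied from this log for illustration only:

package main

import (
	"os"
	"text/template"
)

// kubeletOpts holds the node parameters substituted into the drop-in;
// minikube derives these from the cluster config shown above.
type kubeletOpts struct {
	Version  string
	Hostname string
	NodeIP   string
}

const unit = `[Unit]
Wants=containerd.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.Hostname}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(unit))
	if err := t.Execute(os.Stdout, kubeletOpts{"v1.35.0-beta.0", "functional-667319", "192.168.49.2"}); err != nil {
		panic(err)
	}
}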
	I1209 04:26:52.665384 1187425 ssh_runner.go:195] Run: sudo crictl info
	I1209 04:26:52.686718 1187425 command_runner.go:130] > {
	I1209 04:26:52.686786 1187425 command_runner.go:130] >   "cniconfig": {
	I1209 04:26:52.686805 1187425 command_runner.go:130] >     "Networks": [
	I1209 04:26:52.686825 1187425 command_runner.go:130] >       {
	I1209 04:26:52.686864 1187425 command_runner.go:130] >         "Config": {
	I1209 04:26:52.686886 1187425 command_runner.go:130] >           "CNIVersion": "0.3.1",
	I1209 04:26:52.686905 1187425 command_runner.go:130] >           "Name": "cni-loopback",
	I1209 04:26:52.686923 1187425 command_runner.go:130] >           "Plugins": [
	I1209 04:26:52.686940 1187425 command_runner.go:130] >             {
	I1209 04:26:52.686967 1187425 command_runner.go:130] >               "Network": {
	I1209 04:26:52.686991 1187425 command_runner.go:130] >                 "ipam": {},
	I1209 04:26:52.687011 1187425 command_runner.go:130] >                 "type": "loopback"
	I1209 04:26:52.687028 1187425 command_runner.go:130] >               },
	I1209 04:26:52.687048 1187425 command_runner.go:130] >               "Source": "{\"type\":\"loopback\"}"
	I1209 04:26:52.687074 1187425 command_runner.go:130] >             }
	I1209 04:26:52.687097 1187425 command_runner.go:130] >           ],
	I1209 04:26:52.687120 1187425 command_runner.go:130] >           "Source": "{\n\"cniVersion\": \"0.3.1\",\n\"name\": \"cni-loopback\",\n\"plugins\": [{\n  \"type\": \"loopback\"\n}]\n}"
	I1209 04:26:52.687138 1187425 command_runner.go:130] >         },
	I1209 04:26:52.687160 1187425 command_runner.go:130] >         "IFName": "lo"
	I1209 04:26:52.687191 1187425 command_runner.go:130] >       }
	I1209 04:26:52.687207 1187425 command_runner.go:130] >     ],
	I1209 04:26:52.687225 1187425 command_runner.go:130] >     "PluginConfDir": "/etc/cni/net.d",
	I1209 04:26:52.687243 1187425 command_runner.go:130] >     "PluginDirs": [
	I1209 04:26:52.687272 1187425 command_runner.go:130] >       "/opt/cni/bin"
	I1209 04:26:52.687293 1187425 command_runner.go:130] >     ],
	I1209 04:26:52.687317 1187425 command_runner.go:130] >     "PluginMaxConfNum": 1,
	I1209 04:26:52.687334 1187425 command_runner.go:130] >     "Prefix": "eth"
	I1209 04:26:52.687351 1187425 command_runner.go:130] >   },
	I1209 04:26:52.687378 1187425 command_runner.go:130] >   "config": {
	I1209 04:26:52.687401 1187425 command_runner.go:130] >     "cdiSpecDirs": [
	I1209 04:26:52.687418 1187425 command_runner.go:130] >       "/etc/cdi",
	I1209 04:26:52.687438 1187425 command_runner.go:130] >       "/var/run/cdi"
	I1209 04:26:52.687457 1187425 command_runner.go:130] >     ],
	I1209 04:26:52.687483 1187425 command_runner.go:130] >     "cni": {
	I1209 04:26:52.687505 1187425 command_runner.go:130] >       "binDir": "",
	I1209 04:26:52.687560 1187425 command_runner.go:130] >       "binDirs": [
	I1209 04:26:52.687588 1187425 command_runner.go:130] >         "/opt/cni/bin"
	I1209 04:26:52.687609 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.687628 1187425 command_runner.go:130] >       "confDir": "/etc/cni/net.d",
	I1209 04:26:52.687646 1187425 command_runner.go:130] >       "confTemplate": "",
	I1209 04:26:52.687665 1187425 command_runner.go:130] >       "ipPref": "",
	I1209 04:26:52.687692 1187425 command_runner.go:130] >       "maxConfNum": 1,
	I1209 04:26:52.687715 1187425 command_runner.go:130] >       "setupSerially": false,
	I1209 04:26:52.687733 1187425 command_runner.go:130] >       "useInternalLoopback": false
	I1209 04:26:52.687749 1187425 command_runner.go:130] >     },
	I1209 04:26:52.687775 1187425 command_runner.go:130] >     "containerd": {
	I1209 04:26:52.687802 1187425 command_runner.go:130] >       "defaultRuntimeName": "runc",
	I1209 04:26:52.687825 1187425 command_runner.go:130] >       "ignoreBlockIONotEnabledErrors": false,
	I1209 04:26:52.687845 1187425 command_runner.go:130] >       "ignoreRdtNotEnabledErrors": false,
	I1209 04:26:52.687861 1187425 command_runner.go:130] >       "runtimes": {
	I1209 04:26:52.687878 1187425 command_runner.go:130] >         "runc": {
	I1209 04:26:52.687905 1187425 command_runner.go:130] >           "ContainerAnnotations": null,
	I1209 04:26:52.687929 1187425 command_runner.go:130] >           "PodAnnotations": null,
	I1209 04:26:52.687948 1187425 command_runner.go:130] >           "baseRuntimeSpec": "",
	I1209 04:26:52.687965 1187425 command_runner.go:130] >           "cgroupWritable": false,
	I1209 04:26:52.687982 1187425 command_runner.go:130] >           "cniConfDir": "",
	I1209 04:26:52.688009 1187425 command_runner.go:130] >           "cniMaxConfNum": 0,
	I1209 04:26:52.688042 1187425 command_runner.go:130] >           "io_type": "",
	I1209 04:26:52.688055 1187425 command_runner.go:130] >           "options": {
	I1209 04:26:52.688060 1187425 command_runner.go:130] >             "BinaryName": "",
	I1209 04:26:52.688065 1187425 command_runner.go:130] >             "CriuImagePath": "",
	I1209 04:26:52.688070 1187425 command_runner.go:130] >             "CriuWorkPath": "",
	I1209 04:26:52.688078 1187425 command_runner.go:130] >             "IoGid": 0,
	I1209 04:26:52.688082 1187425 command_runner.go:130] >             "IoUid": 0,
	I1209 04:26:52.688086 1187425 command_runner.go:130] >             "NoNewKeyring": false,
	I1209 04:26:52.688093 1187425 command_runner.go:130] >             "Root": "",
	I1209 04:26:52.688097 1187425 command_runner.go:130] >             "ShimCgroup": "",
	I1209 04:26:52.688109 1187425 command_runner.go:130] >             "SystemdCgroup": false
	I1209 04:26:52.688113 1187425 command_runner.go:130] >           },
	I1209 04:26:52.688118 1187425 command_runner.go:130] >           "privileged_without_host_devices": false,
	I1209 04:26:52.688128 1187425 command_runner.go:130] >           "privileged_without_host_devices_all_devices_allowed": false,
	I1209 04:26:52.688138 1187425 command_runner.go:130] >           "runtimePath": "",
	I1209 04:26:52.688145 1187425 command_runner.go:130] >           "runtimeType": "io.containerd.runc.v2",
	I1209 04:26:52.688153 1187425 command_runner.go:130] >           "sandboxer": "podsandbox",
	I1209 04:26:52.688157 1187425 command_runner.go:130] >           "snapshotter": ""
	I1209 04:26:52.688161 1187425 command_runner.go:130] >         }
	I1209 04:26:52.688164 1187425 command_runner.go:130] >       }
	I1209 04:26:52.688167 1187425 command_runner.go:130] >     },
	I1209 04:26:52.688181 1187425 command_runner.go:130] >     "containerdEndpoint": "/run/containerd/containerd.sock",
	I1209 04:26:52.688190 1187425 command_runner.go:130] >     "containerdRootDir": "/var/lib/containerd",
	I1209 04:26:52.688198 1187425 command_runner.go:130] >     "device_ownership_from_security_context": false,
	I1209 04:26:52.688205 1187425 command_runner.go:130] >     "disableApparmor": false,
	I1209 04:26:52.688210 1187425 command_runner.go:130] >     "disableHugetlbController": true,
	I1209 04:26:52.688218 1187425 command_runner.go:130] >     "disableProcMount": false,
	I1209 04:26:52.688223 1187425 command_runner.go:130] >     "drainExecSyncIOTimeout": "0s",
	I1209 04:26:52.688231 1187425 command_runner.go:130] >     "enableCDI": true,
	I1209 04:26:52.688235 1187425 command_runner.go:130] >     "enableSelinux": false,
	I1209 04:26:52.688240 1187425 command_runner.go:130] >     "enableUnprivilegedICMP": true,
	I1209 04:26:52.688248 1187425 command_runner.go:130] >     "enableUnprivilegedPorts": true,
	I1209 04:26:52.688253 1187425 command_runner.go:130] >     "ignoreDeprecationWarnings": null,
	I1209 04:26:52.688259 1187425 command_runner.go:130] >     "ignoreImageDefinedVolumes": false,
	I1209 04:26:52.688269 1187425 command_runner.go:130] >     "maxContainerLogLineSize": 16384,
	I1209 04:26:52.688278 1187425 command_runner.go:130] >     "netnsMountsUnderStateDir": false,
	I1209 04:26:52.688282 1187425 command_runner.go:130] >     "restrictOOMScoreAdj": false,
	I1209 04:26:52.688293 1187425 command_runner.go:130] >     "rootDir": "/var/lib/containerd/io.containerd.grpc.v1.cri",
	I1209 04:26:52.688297 1187425 command_runner.go:130] >     "selinuxCategoryRange": 1024,
	I1209 04:26:52.688306 1187425 command_runner.go:130] >     "stateDir": "/run/containerd/io.containerd.grpc.v1.cri",
	I1209 04:26:52.688312 1187425 command_runner.go:130] >     "tolerateMissingHugetlbController": true,
	I1209 04:26:52.688320 1187425 command_runner.go:130] >     "unsetSeccompProfile": ""
	I1209 04:26:52.688323 1187425 command_runner.go:130] >   },
	I1209 04:26:52.688327 1187425 command_runner.go:130] >   "features": {
	I1209 04:26:52.688332 1187425 command_runner.go:130] >     "supplemental_groups_policy": true
	I1209 04:26:52.688337 1187425 command_runner.go:130] >   },
	I1209 04:26:52.688341 1187425 command_runner.go:130] >   "golang": "go1.24.9",
	I1209 04:26:52.688355 1187425 command_runner.go:130] >   "lastCNILoadStatus": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1209 04:26:52.688368 1187425 command_runner.go:130] >   "lastCNILoadStatus.default": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1209 04:26:52.688376 1187425 command_runner.go:130] >   "runtimeHandlers": [
	I1209 04:26:52.688379 1187425 command_runner.go:130] >     {
	I1209 04:26:52.688388 1187425 command_runner.go:130] >       "features": {
	I1209 04:26:52.688394 1187425 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1209 04:26:52.688403 1187425 command_runner.go:130] >         "user_namespaces": true
	I1209 04:26:52.688406 1187425 command_runner.go:130] >       }
	I1209 04:26:52.688409 1187425 command_runner.go:130] >     },
	I1209 04:26:52.688412 1187425 command_runner.go:130] >     {
	I1209 04:26:52.688416 1187425 command_runner.go:130] >       "features": {
	I1209 04:26:52.688423 1187425 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1209 04:26:52.688432 1187425 command_runner.go:130] >         "user_namespaces": true
	I1209 04:26:52.688435 1187425 command_runner.go:130] >       },
	I1209 04:26:52.688439 1187425 command_runner.go:130] >       "name": "runc"
	I1209 04:26:52.688446 1187425 command_runner.go:130] >     }
	I1209 04:26:52.688449 1187425 command_runner.go:130] >   ],
	I1209 04:26:52.688457 1187425 command_runner.go:130] >   "status": {
	I1209 04:26:52.688461 1187425 command_runner.go:130] >     "conditions": [
	I1209 04:26:52.688469 1187425 command_runner.go:130] >       {
	I1209 04:26:52.688476 1187425 command_runner.go:130] >         "message": "",
	I1209 04:26:52.688484 1187425 command_runner.go:130] >         "reason": "",
	I1209 04:26:52.688488 1187425 command_runner.go:130] >         "status": true,
	I1209 04:26:52.688493 1187425 command_runner.go:130] >         "type": "RuntimeReady"
	I1209 04:26:52.688497 1187425 command_runner.go:130] >       },
	I1209 04:26:52.688502 1187425 command_runner.go:130] >       {
	I1209 04:26:52.688509 1187425 command_runner.go:130] >         "message": "Network plugin returns error: cni plugin not initialized",
	I1209 04:26:52.688518 1187425 command_runner.go:130] >         "reason": "NetworkPluginNotReady",
	I1209 04:26:52.688522 1187425 command_runner.go:130] >         "status": false,
	I1209 04:26:52.688530 1187425 command_runner.go:130] >         "type": "NetworkReady"
	I1209 04:26:52.688534 1187425 command_runner.go:130] >       },
	I1209 04:26:52.688541 1187425 command_runner.go:130] >       {
	I1209 04:26:52.688568 1187425 command_runner.go:130] >         "message": "{\"io.containerd.deprecation/cgroup-v1\":\"The support for cgroup v1 is deprecated since containerd v2.2 and will be removed by no later than May 2029. Upgrade the host to use cgroup v2.\"}",
	I1209 04:26:52.688578 1187425 command_runner.go:130] >         "reason": "ContainerdHasDeprecationWarnings",
	I1209 04:26:52.688584 1187425 command_runner.go:130] >         "status": false,
	I1209 04:26:52.688590 1187425 command_runner.go:130] >         "type": "ContainerdHasNoDeprecationWarnings"
	I1209 04:26:52.688595 1187425 command_runner.go:130] >       }
	I1209 04:26:52.688598 1187425 command_runner.go:130] >     ]
	I1209 04:26:52.688606 1187425 command_runner.go:130] >   }
	I1209 04:26:52.688609 1187425 command_runner.go:130] > }
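The status.conditions array at the end of this "crictl info" dump is how the CRI reports readiness: RuntimeReady is true, but NetworkReady is false because no CNI config exists in /etc/cni/net.d yet, which is what the next step addresses by picking a CNI. A small Go sketch that surfaces the failing conditions; field names are taken from the JSON above, and this is not minikube's code:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// condition mirrors one entry of status.conditions in `crictl info` output.
type condition struct {
	Type    string `json:"type"`
	Status  bool   `json:"status"`
	Message string `json:"message"`
}

type runtimeInfo struct {
	Status struct {
		Conditions []condition `json:"conditions"`
	} `json:"status"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "info").Output()
	if err != nil {
		panic(err)
	}
	var info runtimeInfo
	if err := json.Unmarshal(out, &info); err != nil {
		panic(err)
	}
	for _, c := range info.Status.Conditions {
		if !c.Status {
			// e.g. "not ready: NetworkReady: Network plugin returns error: cni plugin not initialized"
			fmt.Printf("not ready: %s: %s\n", c.Type, c.Message)
		}
	}
}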
	I1209 04:26:52.690920 1187425 cni.go:84] Creating CNI manager for ""
	I1209 04:26:52.690942 1187425 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1209 04:26:52.690965 1187425 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1209 04:26:52.690987 1187425 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-667319 NodeName:functional-667319 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1209 04:26:52.691101 1187425 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-667319"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
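The rendered kubeadm config above is four YAML documents separated by "---" (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A quick Go sanity check that decodes each document and prints its apiVersion and kind before the file is handed to kubeadm; this is a sketch using the third-party gopkg.in/yaml.v3 package, not anything minikube itself runs:

package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3" // third-party: go get gopkg.in/yaml.v3
)

func main() {
	// Feed the multi-document config on stdin, e.g.
	//   go run . < /var/tmp/minikube/kubeadm.yaml.new
	dec := yaml.NewDecoder(os.Stdin)
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		err := dec.Decode(&doc)
		if err == io.EOF {
			break
		}
		if err != nil {
			panic(err)
		}
		fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
	}
}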
	
	I1209 04:26:52.691179 1187425 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1209 04:26:52.697985 1187425 command_runner.go:130] > kubeadm
	I1209 04:26:52.698006 1187425 command_runner.go:130] > kubectl
	I1209 04:26:52.698010 1187425 command_runner.go:130] > kubelet
	I1209 04:26:52.698825 1187425 binaries.go:51] Found k8s binaries, skipping transfer
	I1209 04:26:52.698896 1187425 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1209 04:26:52.706638 1187425 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1209 04:26:52.718822 1187425 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1209 04:26:52.731825 1187425 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
	I1209 04:26:52.744962 1187425 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1209 04:26:52.748733 1187425 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1209 04:26:52.748987 1187425 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 04:26:52.855986 1187425 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 04:26:53.181367 1187425 certs.go:69] Setting up /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319 for IP: 192.168.49.2
	I1209 04:26:53.181392 1187425 certs.go:195] generating shared ca certs ...
	I1209 04:26:53.181408 1187425 certs.go:227] acquiring lock for ca certs: {Name:mk15788702f8c4e23b5aeab3f44961d296fab259 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 04:26:53.181570 1187425 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.key
	I1209 04:26:53.181618 1187425 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/proxy-client-ca.key
	I1209 04:26:53.181630 1187425 certs.go:257] generating profile certs ...
	I1209 04:26:53.181740 1187425 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/client.key
	I1209 04:26:53.181805 1187425 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/apiserver.key.c80eb595
	I1209 04:26:53.181848 1187425 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/proxy-client.key
	I1209 04:26:53.181859 1187425 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1209 04:26:53.181873 1187425 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1209 04:26:53.181889 1187425 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1142328/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1209 04:26:53.181899 1187425 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1142328/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1209 04:26:53.181914 1187425 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1209 04:26:53.181925 1187425 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1209 04:26:53.181943 1187425 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1209 04:26:53.181954 1187425 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1209 04:26:53.182004 1187425 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/1144231.pem (1338 bytes)
	W1209 04:26:53.182038 1187425 certs.go:480] ignoring /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/1144231_empty.pem, impossibly tiny 0 bytes
	I1209 04:26:53.182050 1187425 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca-key.pem (1679 bytes)
	I1209 04:26:53.182079 1187425 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem (1078 bytes)
	I1209 04:26:53.182105 1187425 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/cert.pem (1123 bytes)
	I1209 04:26:53.182136 1187425 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/key.pem (1675 bytes)
	I1209 04:26:53.182187 1187425 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/ssl/certs/11442312.pem (1708 bytes)
	I1209 04:26:53.182243 1187425 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/ssl/certs/11442312.pem -> /usr/share/ca-certificates/11442312.pem
	I1209 04:26:53.182260 1187425 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1209 04:26:53.182277 1187425 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/1144231.pem -> /usr/share/ca-certificates/1144231.pem
	I1209 04:26:53.182817 1187425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 04:26:53.202751 1187425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1209 04:26:53.220083 1187425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 04:26:53.237728 1187425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1209 04:26:53.255002 1187425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1209 04:26:53.271923 1187425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1209 04:26:53.289401 1187425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 04:26:53.306616 1187425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1209 04:26:53.323564 1187425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/ssl/certs/11442312.pem --> /usr/share/ca-certificates/11442312.pem (1708 bytes)
	I1209 04:26:53.340526 1187425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 04:26:53.357221 1187425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/1144231.pem --> /usr/share/ca-certificates/1144231.pem (1338 bytes)
	I1209 04:26:53.373705 1187425 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 04:26:53.386274 1187425 ssh_runner.go:195] Run: openssl version
	I1209 04:26:53.391826 1187425 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1209 04:26:53.392252 1187425 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/11442312.pem
	I1209 04:26:53.399306 1187425 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/11442312.pem /etc/ssl/certs/11442312.pem
	I1209 04:26:53.406404 1187425 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11442312.pem
	I1209 04:26:53.409862 1187425 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec  9 04:18 /usr/share/ca-certificates/11442312.pem
	I1209 04:26:53.409914 1187425 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 04:18 /usr/share/ca-certificates/11442312.pem
	I1209 04:26:53.409972 1187425 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11442312.pem
	I1209 04:26:53.450109 1187425 command_runner.go:130] > 3ec20f2e
	I1209 04:26:53.450580 1187425 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1209 04:26:53.457724 1187425 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1209 04:26:53.464857 1187425 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1209 04:26:53.472136 1187425 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 04:26:53.475789 1187425 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec  9 04:09 /usr/share/ca-certificates/minikubeCA.pem
	I1209 04:26:53.475830 1187425 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 04:09 /usr/share/ca-certificates/minikubeCA.pem
	I1209 04:26:53.475880 1187425 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 04:26:53.517012 1187425 command_runner.go:130] > b5213941
	I1209 04:26:53.517090 1187425 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1209 04:26:53.524195 1187425 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1144231.pem
	I1209 04:26:53.531059 1187425 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1144231.pem /etc/ssl/certs/1144231.pem
	I1209 04:26:53.537929 1187425 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1144231.pem
	I1209 04:26:53.541362 1187425 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec  9 04:18 /usr/share/ca-certificates/1144231.pem
	I1209 04:26:53.541587 1187425 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 04:18 /usr/share/ca-certificates/1144231.pem
	I1209 04:26:53.541670 1187425 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1144231.pem
	I1209 04:26:53.586134 1187425 command_runner.go:130] > 51391683
	I1209 04:26:53.586694 1187425 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
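Each CA certificate above is installed the same way: the PEM is copied under /usr/share/ca-certificates, its OpenSSL subject hash is computed, and a symlink named <hash>.0 is created in /etc/ssl/certs. A Go sketch that mirrors those exact commands by shelling out to openssl and ln; computing the subject hash natively is deliberately avoided here:

package main

import (
	"fmt"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCACert links pemPath into /etc/ssl/certs under its OpenSSL
// subject hash, the same `openssl x509 -hash` + `ln -fs` sequence as the log.
func installCACert(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	return exec.Command("sudo", "ln", "-fs", pemPath, link).Run()
}

func main() {
	fmt.Println(installCACert("/usr/share/ca-certificates/minikubeCA.pem"))
}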
	I1209 04:26:53.593775 1187425 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 04:26:53.597060 1187425 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 04:26:53.597083 1187425 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1209 04:26:53.597090 1187425 command_runner.go:130] > Device: 259,1	Inode: 1317519     Links: 1
	I1209 04:26:53.597096 1187425 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1209 04:26:53.597101 1187425 command_runner.go:130] > Access: 2025-12-09 04:22:46.557738038 +0000
	I1209 04:26:53.597107 1187425 command_runner.go:130] > Modify: 2025-12-09 04:18:42.397294101 +0000
	I1209 04:26:53.597112 1187425 command_runner.go:130] > Change: 2025-12-09 04:18:42.397294101 +0000
	I1209 04:26:53.597120 1187425 command_runner.go:130] >  Birth: 2025-12-09 04:18:42.397294101 +0000
	I1209 04:26:53.597202 1187425 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1209 04:26:53.637326 1187425 command_runner.go:130] > Certificate will not expire
	I1209 04:26:53.637892 1187425 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1209 04:26:53.678262 1187425 command_runner.go:130] > Certificate will not expire
	I1209 04:26:53.678829 1187425 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1209 04:26:53.719319 1187425 command_runner.go:130] > Certificate will not expire
	I1209 04:26:53.719397 1187425 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1209 04:26:53.760102 1187425 command_runner.go:130] > Certificate will not expire
	I1209 04:26:53.760184 1187425 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1209 04:26:53.805340 1187425 command_runner.go:130] > Certificate will not expire
	I1209 04:26:53.805854 1187425 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1209 04:26:53.846216 1187425 command_runner.go:130] > Certificate will not expire
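Each "openssl x509 -checkend 86400" call above exits successfully when the certificate is still valid 24 hours from now. The same test can be done natively with crypto/x509, as in this sketch (not minikube's implementation):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in the PEM file
// expires within d, the same check as `openssl x509 -checkend`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(soon, err) // false <nil> corresponds to "Certificate will not expire"
}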
	I1209 04:26:53.846284 1187425 kubeadm.go:401] StartCluster: {Name:functional-667319 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-667319 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 04:26:53.846701 1187425 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1209 04:26:53.846774 1187425 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 04:26:53.877891 1187425 cri.go:89] found id: ""
	I1209 04:26:53.877982 1187425 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1209 04:26:53.884657 1187425 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1209 04:26:53.884683 1187425 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1209 04:26:53.884690 1187425 command_runner.go:130] > /var/lib/minikube/etcd:
	I1209 04:26:53.885556 1187425 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1209 04:26:53.885572 1187425 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1209 04:26:53.885646 1187425 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1209 04:26:53.892789 1187425 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1209 04:26:53.893171 1187425 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-667319" does not appear in /home/jenkins/minikube-integration/22081-1142328/kubeconfig
	I1209 04:26:53.893275 1187425 kubeconfig.go:62] /home/jenkins/minikube-integration/22081-1142328/kubeconfig needs updating (will repair): [kubeconfig missing "functional-667319" cluster setting kubeconfig missing "functional-667319" context setting]
	I1209 04:26:53.893568 1187425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1142328/kubeconfig: {Name:mk0dc127429f88dc4fdfb2d110deebc58207c1b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
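kubeconfig.go flags the file as "needs updating (will repair)" because neither the cluster entry nor the context entry for functional-667319 exists yet. A sketch of that existence check using client-go's clientcmd loader; the dependency is third-party and the helper name is hypothetical:

package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd" // third-party: part of client-go
)

// hasClusterAndContext reports whether the kubeconfig at path defines both
// a cluster and a context under the given name.
func hasClusterAndContext(path, name string) (bool, error) {
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		return false, err
	}
	_, cluster := cfg.Clusters[name]
	_, ctx := cfg.Contexts[name]
	return cluster && ctx, nil
}

func main() {
	ok, err := hasClusterAndContext("/home/jenkins/minikube-integration/22081-1142328/kubeconfig", "functional-667319")
	fmt.Println(ok, err)
}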
	I1209 04:26:53.893971 1187425 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22081-1142328/kubeconfig
	I1209 04:26:53.894121 1187425 kapi.go:59] client config for functional-667319: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/client.crt", KeyFile:"/home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/client.key", CAFile:"/home/jenkins/minikube-integration/22081-1142328/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb3ec0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1209 04:26:53.894601 1187425 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1209 04:26:53.894621 1187425 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1209 04:26:53.894627 1187425 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1209 04:26:53.894636 1187425 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1209 04:26:53.894643 1187425 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1209 04:26:53.894942 1187425 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1209 04:26:53.895030 1187425 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1209 04:26:53.902229 1187425 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1209 04:26:53.902301 1187425 kubeadm.go:602] duration metric: took 16.713333ms to restartPrimaryControlPlane
	I1209 04:26:53.902316 1187425 kubeadm.go:403] duration metric: took 56.036306ms to StartCluster
	I1209 04:26:53.902333 1187425 settings.go:142] acquiring lock: {Name:mk8fa744e3d74bf8a1cbf5ac275c9f1969ad91a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 04:26:53.902398 1187425 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22081-1142328/kubeconfig
	I1209 04:26:53.902993 1187425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1142328/kubeconfig: {Name:mk0dc127429f88dc4fdfb2d110deebc58207c1b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 04:26:53.903190 1187425 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1209 04:26:53.903521 1187425 config.go:182] Loaded profile config "functional-667319": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1209 04:26:53.903568 1187425 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1209 04:26:53.903630 1187425 addons.go:70] Setting storage-provisioner=true in profile "functional-667319"
	I1209 04:26:53.903643 1187425 addons.go:239] Setting addon storage-provisioner=true in "functional-667319"
	I1209 04:26:53.903675 1187425 host.go:66] Checking if "functional-667319" exists ...
	I1209 04:26:53.904120 1187425 addons.go:70] Setting default-storageclass=true in profile "functional-667319"
	I1209 04:26:53.904144 1187425 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-667319"
	I1209 04:26:53.904441 1187425 cli_runner.go:164] Run: docker container inspect functional-667319 --format={{.State.Status}}
	I1209 04:26:53.904640 1187425 cli_runner.go:164] Run: docker container inspect functional-667319 --format={{.State.Status}}
	I1209 04:26:53.910201 1187425 out.go:179] * Verifying Kubernetes components...
	I1209 04:26:53.913884 1187425 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 04:26:53.930099 1187425 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 04:26:53.932721 1187425 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22081-1142328/kubeconfig
	I1209 04:26:53.932880 1187425 kapi.go:59] client config for functional-667319: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/client.crt", KeyFile:"/home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/client.key", CAFile:"/home/jenkins/minikube-integration/22081-1142328/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb3ec0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
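
[annotation] The kapi.go line above corresponds to a few lines of client-go. A minimal sketch, using the host and cert paths from the log; the helper name buildClient is illustrative, not minikube code:

    package main

    import (
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/rest"
    )

    // buildClient reconstructs the rest.Config logged by kapi.go:
    // the apiserver host plus mutual-TLS client certificates.
    func buildClient() (*kubernetes.Clientset, error) {
    	cfg := &rest.Config{
    		Host: "https://192.168.49.2:8441",
    		TLSClientConfig: rest.TLSClientConfig{
    			CertFile: "/home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/client.crt",
    			KeyFile:  "/home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/client.key",
    			CAFile:   "/home/jenkins/minikube-integration/22081-1142328/.minikube/ca.crt",
    		},
    	}
    	return kubernetes.NewForConfig(cfg)
    }
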
	I1209 04:26:53.933092 1187425 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:26:53.933105 1187425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1209 04:26:53.933155 1187425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-667319
	I1209 04:26:53.933672 1187425 addons.go:239] Setting addon default-storageclass=true in "functional-667319"
	I1209 04:26:53.933726 1187425 host.go:66] Checking if "functional-667319" exists ...
	I1209 04:26:53.934157 1187425 cli_runner.go:164] Run: docker container inspect functional-667319 --format={{.State.Status}}
	I1209 04:26:53.980209 1187425 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/functional-667319/id_rsa Username:docker}
	I1209 04:26:53.991515 1187425 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1209 04:26:53.991543 1187425 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1209 04:26:53.991606 1187425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-667319
	I1209 04:26:54.014988 1187425 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/functional-667319/id_rsa Username:docker}
	I1209 04:26:54.109673 1187425 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 04:26:54.172299 1187425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1209 04:26:54.172446 1187425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
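[annotation] As the ssh_runner lines show, each addon manifest is first copied into the node (the "scp memory -->" entries) and then applied with the node-local kubectl binary against /var/lib/minikube/kubeconfig. A rough host-side equivalent, shelling out through `minikube ssh`; the applyAddon helper is hypothetical and only mimics the command the log records, it is not how minikube itself invokes it:

    package main

    import (
    	"os"
    	"os/exec"
    )

    // applyAddon applies a manifest that is already present inside the
    // minikube node, using the node's own kubectl, as in the log above.
    func applyAddon(profile, cmdline string) error {
    	cmd := exec.Command("minikube", "-p", profile, "ssh", cmdline)
    	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    	return cmd.Run()
    }

For example: applyAddon("functional-667319", "sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml").
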
	I1209 04:26:54.932432 1187425 node_ready.go:35] waiting up to 6m0s for node "functional-667319" to be "Ready" ...
	I1209 04:26:54.932477 1187425 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:26:54.932512 1187425 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:26:54.932537 1187425 retry.go:31] will retry after 239.582285ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:26:54.932571 1187425 type.go:168] "Request Body" body=""
	I1209 04:26:54.932584 1187425 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:26:54.932596 1187425 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:26:54.932603 1187425 retry.go:31] will retry after 326.615849ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:26:54.932629 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:26:54.932908 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
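[annotation] The repeated GET https://192.168.49.2:8441/api/v1/nodes/functional-667319 requests from here on are the node-readiness poll announced at 04:26:54.932 ("waiting up to 6m0s for node ... to be Ready"). A sketch of an equivalent loop with client-go, reusing the clientset from buildClient above; the 500ms interval matches the cadence visible in the timestamps, the rest is an assumption:

    import (
    	"context"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    )

    // waitNodeReady polls the node until its Ready condition is True, treating
    // transport errors (like the connection-refused responses here) as retryable.
    func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) error {
    	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
    		func(ctx context.Context) (bool, error) {
    			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
    			if err != nil {
    				return false, nil // apiserver unreachable: keep polling
    			}
    			for _, c := range node.Status.Conditions {
    				if c.Type == corev1.NodeReady {
    					return c.Status == corev1.ConditionTrue, nil
    				}
    			}
    			return false, nil
    		})
    }
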
	I1209 04:26:55.173322 1187425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:26:55.233582 1187425 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:26:55.233631 1187425 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:26:55.233651 1187425 retry.go:31] will retry after 246.357107ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:26:55.259785 1187425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 04:26:55.318382 1187425 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:26:55.318469 1187425 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:26:55.318493 1187425 retry.go:31] will retry after 410.345383ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
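[annotation] Each "will retry after ..." line comes from minikube's retry helper; the delays (239ms, 326ms, ... growing to 12.4s later in the log) are uneven and increasing, the signature of jittered exponential backoff. A sketch of that pattern with apimachinery's wait package; the Duration/Factor/Jitter values are assumptions, not minikube's actual tuning:

    import (
    	"time"

    	"k8s.io/apimachinery/pkg/util/wait"
    )

    // applyWithBackoff retries a transient-failure-prone action with jittered
    // exponential backoff, in the spirit of the retry.go lines above.
    func applyWithBackoff(run func() error) error {
    	backoff := wait.Backoff{
    		Duration: 250 * time.Millisecond, // initial delay (assumed)
    		Factor:   1.5,                    // per-attempt growth (assumed)
    		Jitter:   0.5,                    // randomization, hence the uneven delays
    		Steps:    20,                     // attempts before giving up
    	}
    	return wait.ExponentialBackoff(backoff, func() (bool, error) {
    		if err := run(); err != nil {
    			return false, nil // transient: retry after the next delay
    		}
    		return true, nil
    	})
    }
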
	I1209 04:26:55.433607 1187425 type.go:168] "Request Body" body=""
	I1209 04:26:55.433683 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:26:55.434019 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:26:55.480272 1187425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:26:55.539370 1187425 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:26:55.543073 1187425 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:26:55.543104 1187425 retry.go:31] will retry after 836.674318ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:26:55.729246 1187425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 04:26:55.790859 1187425 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:26:55.790906 1187425 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:26:55.790952 1187425 retry.go:31] will retry after 634.479833ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:26:55.933159 1187425 type.go:168] "Request Body" body=""
	I1209 04:26:55.933235 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:26:55.933592 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:26:56.380124 1187425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:26:56.425589 1187425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 04:26:56.432912 1187425 type.go:168] "Request Body" body=""
	I1209 04:26:56.433084 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:26:56.433454 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:26:56.462533 1187425 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:26:56.462616 1187425 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:26:56.462643 1187425 retry.go:31] will retry after 603.323732ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:26:56.528272 1187425 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:26:56.528318 1187425 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:26:56.528338 1187425 retry.go:31] will retry after 1.072780189s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:26:56.932753 1187425 type.go:168] "Request Body" body=""
	I1209 04:26:56.932827 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:26:56.933209 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:26:56.933265 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
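[annotation] This warning is the key symptom: kubectl inside the node (localhost:8441) and the host-side poll (192.168.49.2:8441) both get connection refused, so the apiserver itself is down rather than anything being wrong with the manifests. The --validate=false suggestion in the kubectl error would not help here, since the apply itself still needs a reachable apiserver. A trivial reachability probe, as a sketch (apiserverUp is a hypothetical helper):

    import (
    	"net"
    	"time"
    )

    // apiserverUp does a cheap TCP dial against the apiserver endpoint; a
    // refused connection matches what both kubectl and the node poll see.
    func apiserverUp(addr string) bool {
    	conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
    	if err != nil {
    		return false
    	}
    	conn.Close()
    	return true
    }

For example: apiserverUp("192.168.49.2:8441").
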
	I1209 04:26:57.066591 1187425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:26:57.132172 1187425 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:26:57.135761 1187425 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:26:57.135793 1187425 retry.go:31] will retry after 1.855495012s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:26:57.433210 1187425 type.go:168] "Request Body" body=""
	I1209 04:26:57.433286 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:26:57.433630 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:26:57.601957 1187425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 04:26:57.657995 1187425 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:26:57.658038 1187425 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:26:57.658057 1187425 retry.go:31] will retry after 1.134842328s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:26:57.933276 1187425 type.go:168] "Request Body" body=""
	I1209 04:26:57.933355 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:26:57.933644 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:26:58.433445 1187425 type.go:168] "Request Body" body=""
	I1209 04:26:58.433533 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:26:58.433853 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:26:58.793130 1187425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 04:26:58.858674 1187425 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:26:58.858714 1187425 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:26:58.858733 1187425 retry.go:31] will retry after 2.746713696s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:26:58.933078 1187425 type.go:168] "Request Body" body=""
	I1209 04:26:58.933157 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:26:58.933497 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:26:58.933557 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:26:58.991692 1187425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:26:59.049214 1187425 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:26:59.052768 1187425 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:26:59.052797 1187425 retry.go:31] will retry after 2.715253433s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:26:59.433202 1187425 type.go:168] "Request Body" body=""
	I1209 04:26:59.433383 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:26:59.433760 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:26:59.932622 1187425 type.go:168] "Request Body" body=""
	I1209 04:26:59.932706 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:26:59.933025 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:00.432716 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:00.432792 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:00.433084 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:00.932666 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:00.932767 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:00.933080 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:01.432721 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:01.432802 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:01.433155 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:27:01.433220 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:27:01.606514 1187425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 04:27:01.664108 1187425 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:27:01.667800 1187425 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:27:01.667831 1187425 retry.go:31] will retry after 3.567848129s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:27:01.769041 1187425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:27:01.828356 1187425 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:27:01.831855 1187425 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:27:01.831890 1187425 retry.go:31] will retry after 1.487712174s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:27:01.933283 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:01.933357 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:01.933696 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:02.433227 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:02.433296 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:02.433566 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:02.933365 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:02.933446 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:02.933784 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:03.320437 1187425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:27:03.380650 1187425 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:27:03.380689 1187425 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:27:03.380707 1187425 retry.go:31] will retry after 2.980491619s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:27:03.432967 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:03.433052 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:03.433335 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:27:03.433382 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:27:03.933173 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:03.933261 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:03.933564 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:04.433334 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:04.433407 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:04.433774 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:04.932608 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:04.932706 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:04.932991 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:05.236581 1187425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 04:27:05.294920 1187425 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:27:05.298256 1187425 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:27:05.298287 1187425 retry.go:31] will retry after 3.775902085s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:27:05.433544 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:05.433623 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:05.433911 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:27:05.433968 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:27:05.932633 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:05.932732 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:05.933097 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:06.361776 1187425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:27:06.423571 1187425 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:27:06.423609 1187425 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:27:06.423628 1187425 retry.go:31] will retry after 5.55631863s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:27:06.432687 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:06.432759 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:06.433064 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:06.932763 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:06.932858 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:06.933188 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:07.432720 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:07.432798 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:07.433122 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:07.932712 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:07.932788 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:07.933143 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:27:07.933270 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:27:08.432753 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:08.432826 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:08.433121 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:08.932708 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:08.932789 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:08.933114 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:09.074480 1187425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 04:27:09.131213 1187425 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:27:09.134642 1187425 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:27:09.134677 1187425 retry.go:31] will retry after 3.336397846s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:27:09.433063 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:09.433136 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:09.433477 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:09.933147 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:09.933243 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:09.933515 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:27:09.933565 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:27:10.433463 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:10.433543 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:10.433860 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:10.933720 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:10.933792 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:10.934110 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:11.432758 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:11.432831 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:11.433103 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:11.932702 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:11.932775 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:11.933127 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:11.980489 1187425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:27:12.042917 1187425 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:27:12.047245 1187425 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:27:12.047276 1187425 retry.go:31] will retry after 4.846358398s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:27:12.432667 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:12.432737 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:12.433027 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:27:12.433074 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:27:12.471387 1187425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 04:27:12.533451 1187425 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:27:12.533488 1187425 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:27:12.533508 1187425 retry.go:31] will retry after 12.396608004s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	… [04:27:12.9–04:27:16.4: GET https://192.168.49.2:8441/api/v1/nodes/functional-667319 repeated every ~500ms, each attempt failing with "dial tcp 192.168.49.2:8441: connect: connection refused"; node_ready retry warnings at 04:27:14.4 and 04:27:16.4] …
	I1209 04:27:16.894794 1187425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:27:16.933270 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:16.933350 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:16.933633 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:16.956237 1187425 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:27:16.956277 1187425 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:27:16.956299 1187425 retry.go:31] will retry after 11.708634593s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	… [04:27:17.4–04:27:24.4: the same ~500ms node poll continues, every attempt refused; node_ready retry warnings at 04:27:18.4, 04:27:20.9 and 04:27:22.9] …
	I1209 04:27:24.930697 1187425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 04:27:24.933014 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:24.933088 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:24.933320 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:27:24.933369 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:27:25.005568 1187425 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:27:25.005627 1187425 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:27:25.005648 1187425 retry.go:31] will retry after 8.82909482s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
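
Interleaved with the apply retries, the node_ready.go warnings show a second loop: every ~500ms minikube GETs the node object and checks its Ready condition, retrying while the request fails. A hedged client-go sketch of that poll — the node name and kubeconfig path are taken from the log; the rest is an illustrative assumption, not minikube's node_ready.go:

	// Sketch of a node-readiness poll: GET the node every ~500ms and check
	// its Ready condition, retrying while the apiserver is unreachable.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)
		for {
			node, err := client.CoreV1().Nodes().Get(context.TODO(), "functional-667319", metav1.GetOptions{})
			if err != nil {
				// Same shape as the W... node_ready.go:55 warnings: log and retry.
				fmt.Printf("error getting node \"functional-667319\" (will retry): %v\n", err)
				time.Sleep(500 * time.Millisecond)
				continue
			}
			for _, cond := range node.Status.Conditions {
				if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
					fmt.Println("node is Ready")
					return
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
	}
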
	… [04:27:25.4–04:27:28.4: node poll continues, all attempts refused; node_ready retry warning at 04:27:27.4] …
	I1209 04:27:28.665515 1187425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:27:28.738878 1187425 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:27:28.745399 1187425 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:27:28.745439 1187425 retry.go:31] will retry after 17.60519501s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	… [04:27:28.9–04:27:33.4: node poll continues, all attempts refused; node_ready retry warnings at 04:27:29.9 and 04:27:31.9] …
	I1209 04:27:33.835821 1187425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 04:27:33.901341 1187425 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:27:33.901393 1187425 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:27:33.901417 1187425 retry.go:31] will retry after 15.074885047s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	… [04:27:33.9–04:27:45.9: node poll continues, all attempts refused; node_ready retry warnings roughly every 2–2.5s (04:27:33.9, 04:27:36.4, 04:27:38.9, 04:27:41.4, 04:27:43.9)] …
	I1209 04:27:46.350898 1187425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:27:46.406595 1187425 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:27:46.409949 1187425 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:27:46.409981 1187425 retry.go:31] will retry after 30.377142014s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	… [04:27:46.4–04:27:48.9: node poll continues, all attempts refused; node_ready retry warnings at 04:27:46.4 and 04:27:48.9] …
	I1209 04:27:48.977251 1187425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 04:27:49.036457 1187425 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:27:49.036497 1187425 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:27:49.036517 1187425 retry.go:31] will retry after 20.293703248s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:27:49.432687 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:49.432763 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:49.433127 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:49.932861 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:49.932933 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:49.933269 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:50.433588 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:50.433662 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:50.433924 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:50.932670 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:50.932744 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:50.933080 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:27:50.933141 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:27:51.432801 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:51.432888 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:51.433180 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:51.932877 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:51.932959 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:51.933270 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:52.432704 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:52.432782 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:52.433138 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:52.932700 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:52.932780 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:52.933082 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:53.432653 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:53.432725 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:53.433037 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:27:53.433089 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:27:53.932975 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:53.933048 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:53.933385 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:54.432710 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:54.432790 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:54.433145 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:54.932877 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:54.932952 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:54.933240 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:55.432707 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:55.432782 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:55.433125 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:27:55.433191 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:27:55.932861 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:55.932943 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:55.933270 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:56.432680 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:56.432756 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:56.433029 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:56.932719 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:56.932801 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:56.933134 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:57.432697 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:57.432773 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:57.433096 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:57.932659 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:57.932729 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:57.933033 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:27:57.933082 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	[... poll repeated every ~500ms from 04:27:58.43 through 04:28:08.93 with identical request/response lines; only the periodic retry warnings are kept ...]
	W1209 04:28:00.433277 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	W1209 04:28:02.933190 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	W1209 04:28:05.433107 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	W1209 04:28:07.434011 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:28:09.330698 1187425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 04:28:09.392626 1187425 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:28:09.392671 1187425 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:28:09.392765 1187425 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
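
The storageclass apply fails for the same underlying reason: kubectl's client-side validation first downloads /openapi/v2 from the server in the kubeconfig (localhost:8441 here), and with the apiserver down that dial is refused, so addons.go records "apply failed, will retry". A minimal sketch of that shell-out-and-retry pattern; the fixed 5s backoff, attempt count, and function name are assumptions, not minikube's actual addons code:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// applyWithRetry shells out to kubectl like the log above and retries on
// failure, returning the last error if every attempt fails.
func applyWithRetry(kubectlPath, kubeconfig, manifest string, attempts int) error {
	var lastErr error
	for i := 0; i < attempts; i++ {
		cmd := exec.Command("sudo", "KUBECONFIG="+kubeconfig,
			kubectlPath, "apply", "--force", "-f", manifest)
		out, err := cmd.CombinedOutput()
		if err == nil {
			return nil
		}
		// While the apiserver is down, validation's GET /openapi/v2 is
		// refused and kubectl exits 1, exactly as in the log.
		lastErr = fmt.Errorf("apply %s failed: %w\n%s", manifest, err, out)
		time.Sleep(5 * time.Second)
	}
	return lastErr
}

func main() {
	err := applyWithRetry(
		"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
		"/var/lib/minikube/kubeconfig",
		"/etc/kubernetes/addons/storageclass.yaml", 3)
	if err != nil {
		fmt.Println(err)
	}
}

Note that kubectl's own hint, --validate=false, would only skip the schema download; the apply itself still needs a reachable apiserver, so it would not rescue this run.
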
	[... poll repeated every ~500ms from 04:28:09.43 through 04:28:16.43 with identical request/response lines; only the periodic retry warnings are kept ...]
	W1209 04:28:09.933037 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	W1209 04:28:11.933801 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	W1209 04:28:14.433109 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	W1209 04:28:16.433135 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:28:16.787371 1187425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:28:16.844461 1187425 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:28:16.844502 1187425 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:28:16.844590 1187425 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1209 04:28:16.849226 1187425 out.go:179] * Enabled addons: 
	I1209 04:28:16.852870 1187425 addons.go:530] duration metric: took 1m22.949297316s for enable addons: enabled=[]
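
With both addon callbacks exhausted, minikube gives up and reports an empty addon set; the "duration metric" line above is the usual time.Since pattern. A minimal sketch of how such a line can be produced (names assumed, not minikube's actual code):

package main

import (
	"fmt"
	"time"
)

func main() {
	start := time.Now()
	var enabled []string // no addon callback succeeded in this run
	// ... run the enable-addons callbacks here ...
	fmt.Printf("duration metric: took %s for enable addons: enabled=%v\n",
		time.Since(start), enabled)
}
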
	[... the same GET poll continued every ~500ms from 04:28:16.93 through 04:28:52.93, every response empty; node_ready.go:55 logged the identical "connection refused" retry warning roughly every 2.5s (04:28:18.43, 04:28:20.43, 04:28:22.43, 04:28:24.93, 04:28:26.93, 04:28:29.43, 04:28:31.93, 04:28:33.93, 04:28:35.93, 04:28:38.43, 04:28:40.43, 04:28:42.43, 04:28:44.43, 04:28:46.93, 04:28:49.43, 04:28:51.43) ...]
	I1209 04:28:53.433153 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:53.433229 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:53.433491 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:53.932707 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:53.932895 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:53.933271 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:28:53.933325 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:28:54.432706 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:54.432787 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:54.433082 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:54.933635 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:54.933745 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:54.934087 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:55.432697 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:55.432773 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:55.433110 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:55.932879 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:55.932954 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:55.933290 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:28:55.933359 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:28:56.432868 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:56.432941 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:56.433305 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:56.932702 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:56.932781 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:56.933131 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:57.432846 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:57.432925 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:57.433270 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:57.932659 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:57.932734 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:57.933027 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:58.432714 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:58.432792 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:58.433128 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:28:58.433197 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:28:58.932868 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:58.932944 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:58.933265 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:59.432668 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:59.432735 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:59.432989 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:59.932616 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:59.932707 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:59.933033 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:00.432720 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:00.432802 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:00.433159 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:29:00.433228 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:29:00.932660 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:00.932731 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:00.933053 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:01.432716 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:01.432794 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:01.433137 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:01.932702 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:01.932776 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:01.933098 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:02.432775 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:02.432843 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:02.433108 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:02.932791 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:02.932873 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:02.933214 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:29:02.933284 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:29:03.432715 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:03.432795 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:03.433113 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:03.933003 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:03.933076 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:03.933364 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:04.432671 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:04.432749 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:04.433066 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:04.932620 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:04.932694 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:04.933013 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:05.432723 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:05.432790 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:05.433120 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:29:05.433184 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:29:05.932842 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:05.932925 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:05.933228 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:06.432721 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:06.432798 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:06.433119 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:06.932673 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:06.932758 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:06.933065 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:07.432690 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:07.432761 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:07.433037 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:07.932687 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:07.932769 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:07.933108 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:29:07.933164 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:29:08.432792 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:08.432858 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:08.433117 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:08.932787 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:08.932863 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:08.933157 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:09.432693 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:09.432764 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:09.433055 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:09.932595 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:09.932672 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:09.932942 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:10.432649 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:10.432719 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:10.433035 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:29:10.433090 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:29:10.932796 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:10.932869 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:10.933200 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:11.432701 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:11.432802 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:11.433137 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:11.932772 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:11.932869 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:11.933219 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:12.432724 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:12.432804 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:12.433120 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:29:12.433175 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:29:12.932648 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:12.932732 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:12.933021 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:13.432608 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:13.432696 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:13.432999 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:13.932923 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:13.932996 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:13.933301 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:14.433005 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:14.433076 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:14.433349 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:29:14.433392 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:29:14.933312 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:14.933390 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:14.933705 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:15.433476 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:15.433554 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:15.433865 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:15.933207 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:15.933288 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:15.933572 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:16.433400 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:16.433476 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:16.433794 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:29:16.433849 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:29:16.933242 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:16.933322 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:16.933648 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:17.433213 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:17.433292 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:17.433548 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:17.933339 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:17.933416 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:17.933707 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:18.433434 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:18.433516 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:18.433853 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:29:18.433907 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:29:18.933184 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:18.933260 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:18.933504 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:19.433298 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:19.433371 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:19.433705 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:19.933618 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:19.933716 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:19.934086 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:20.432647 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:20.432722 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:20.433052 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:20.932730 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:20.932801 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:20.933102 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:29:20.933155 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:29:21.432687 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:21.432769 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:21.433095 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:21.932654 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:21.932755 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:21.933080 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:22.432734 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:22.432823 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:22.433185 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:22.932923 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:22.933014 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:22.933448 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:29:22.933504 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:29:23.433275 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:23.433350 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:23.433652 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:23.933627 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:23.933712 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:23.934033 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:24.432724 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:24.432802 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:24.433135 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:24.932905 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:24.932975 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:24.933297 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:25.432701 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:25.432775 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:25.433100 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:29:25.433159 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:29:25.932854 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:25.932931 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:25.933286 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:26.432982 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:26.433053 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:26.433514 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:26.933295 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:26.933368 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:26.933684 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:27.433488 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:27.433566 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:27.433940 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:29:27.434009 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:29:27.932662 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:27.932734 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:27.933007 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:28.432710 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:28.432783 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:28.433097 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:28.932665 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:28.932744 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:28.933074 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:29.432741 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:29.432816 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:29.433060 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:29.932619 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:29.932701 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:29.933015 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:29:29.933073 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:29:30.432700 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:30.432780 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:30.433106 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:30.932656 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:30.932728 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:30.932982 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:31.432616 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:31.432689 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:31.433009 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:31.932733 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:31.932812 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:31.933149 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:29:31.933201 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:29:32.432841 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:32.432914 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:32.433166 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:32.932705 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:32.932783 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:32.933123 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:33.432882 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:33.432957 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:33.433297 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:33.933053 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:33.933130 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:33.933467 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:29:33.933520 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:29:34.433286 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:34.433403 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:34.433746 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:34.932612 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:34.932683 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:34.933012 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:35.433252 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:35.433331 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:35.433606 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:35.933376 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:35.933452 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:35.933778 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:29:35.933826 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:29:36.433423 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:36.433498 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:36.433798 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:36.933229 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:36.933302 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:36.933556 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:37.433365 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:37.433445 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:37.433756 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:37.933531 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:37.933605 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:37.933936 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:29:37.933989 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:29:38.433192 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:38.433264 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:38.433514 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:38.933271 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:38.933344 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:38.933634 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:39.433290 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:39.433366 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:39.433709 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:39.933513 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:39.933582 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:39.933833 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:40.433616 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:40.433692 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:40.433987 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:29:40.434034 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:29:40.933257 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:40.933329 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:40.933667 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:41.433181 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:41.433267 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:41.433577 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:41.933367 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:41.933449 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:41.933797 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:42.433613 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:42.433687 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:42.434049 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:29:42.434129 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the same GET https://192.168.49.2:8441/api/v1/nodes/functional-667319 poll repeated every ~500ms from 04:29:42.9 through 04:30:43.9; every attempt failed with "dial tcp 192.168.49.2:8441: connect: connection refused" and a will-retry warning was logged roughly every two seconds; the intervening per-request log lines are identical to those above and are elided ...]
	I1209 04:30:44.433223 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:44.433300 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:44.433652 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:30:44.433704 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:30:44.933589 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:44.933670 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:44.934005 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:45.432690 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:45.432762 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:45.433007 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:45.932747 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:45.932822 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:45.933163 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:46.432880 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:46.432953 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:46.433265 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:46.932662 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:46.932736 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:46.933048 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:30:46.933099 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:30:47.432707 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:47.432797 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:47.433190 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:47.932887 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:47.932971 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:47.933316 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:48.432720 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:48.432792 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:48.433100 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:48.932688 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:48.932768 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:48.933088 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:30:48.933148 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:30:49.432733 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:49.432809 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:49.433125 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:49.933000 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:49.933071 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:49.933338 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:50.433013 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:50.433086 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:50.433573 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:50.933345 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:50.933421 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:50.933709 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:30:50.933750 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:30:51.433232 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:51.433307 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:51.433630 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:51.933396 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:51.933477 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:51.933822 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:52.433445 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:52.433526 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:52.433848 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:52.933226 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:52.933298 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:52.933562 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:53.433320 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:53.433394 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:53.433724 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:30:53.433778 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:30:53.932930 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:53.933016 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:53.933473 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:54.433004 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:54.433155 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:54.433480 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:54.933346 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:54.933427 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:54.933751 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:55.433491 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:55.433571 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:55.433940 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:30:55.434008 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:30:55.933242 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:55.933327 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:55.933662 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:56.433447 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:56.433527 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:56.433865 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:56.933651 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:56.933744 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:56.934082 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:57.432792 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:57.432864 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:57.433162 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:57.932683 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:57.932753 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:57.933114 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:30:57.933173 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:30:58.432860 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:58.432937 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:58.433264 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:58.932676 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:58.932748 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:58.932997 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:59.432729 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:59.432808 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:59.433150 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:59.933081 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:59.933159 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:59.933480 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:30:59.933530 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:31:00.433233 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:00.433315 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:00.433580 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:00.933316 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:00.933394 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:00.933727 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:01.433533 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:01.433611 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:01.433948 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:01.933228 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:01.933301 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:01.933558 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:31:01.933611 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:31:02.433377 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:02.433451 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:02.433800 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:02.933601 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:02.933680 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:02.933967 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:03.432648 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:03.432726 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:03.432986 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:03.932952 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:03.933038 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:03.933395 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:04.433141 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:04.433218 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:04.433526 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:31:04.433581 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:31:04.933489 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:04.933558 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:04.933807 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:05.433605 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:05.433678 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:05.434011 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:05.932712 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:05.932791 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:05.933127 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:06.432820 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:06.432900 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:06.433220 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:06.932716 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:06.932796 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:06.933135 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:31:06.933230 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:31:07.432675 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:07.432749 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:07.433059 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:07.932657 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:07.932734 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:07.933058 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:08.432730 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:08.432806 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:08.433103 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:08.932714 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:08.932789 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:08.933128 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:09.432655 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:09.432733 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:09.432994 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:31:09.433050 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:31:09.932911 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:09.932991 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:09.933336 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:10.432707 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:10.432787 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:10.433102 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:10.932665 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:10.932738 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:10.933044 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:11.432740 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:11.432823 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:11.433154 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:31:11.433214 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:31:11.932885 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:11.932968 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:11.933325 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:12.432666 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:12.432738 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:12.433048 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:12.932727 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:12.932804 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:12.933136 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:13.432857 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:13.432936 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:13.433268 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:31:13.433318 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:31:13.933237 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:13.933317 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:13.933599 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:14.433349 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:14.433424 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:14.433772 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:14.933659 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:14.933736 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:14.934065 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:15.432670 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:15.432747 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:15.433015 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:15.932715 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:15.932801 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:15.933127 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:31:15.933183 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:31:16.432689 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:16.432770 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:16.433108 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:16.932805 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:16.932881 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:16.933165 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:17.432843 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:17.432921 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:17.433248 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:17.932974 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:17.933055 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:17.933357 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:31:17.933406 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:31:18.432810 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:18.432881 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:18.433142 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:18.932706 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:18.932778 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:18.933130 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:19.432699 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:19.432777 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:19.433122 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:19.932861 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:19.932936 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:19.933225 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:20.432912 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:20.432990 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:20.433312 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:31:20.433361 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:31:20.933001 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:20.933084 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:20.933413 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:21.432656 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:21.432769 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:21.433112 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:21.932708 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:21.932788 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:21.933119 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:22.432682 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:22.432760 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:22.433128 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:22.932684 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:22.932752 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:22.932998 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:31:22.933038 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:31:23.432679 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:23.432761 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:23.433116 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:23.932894 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:23.932973 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:23.933311 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:24.432655 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:24.432728 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:24.432998 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:24.932905 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:24.932983 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:24.933347 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:31:24.933403 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:31:25.432684 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:25.432758 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:25.433091 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:25.932662 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:25.932737 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:25.933053 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:26.432697 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:26.432782 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:26.433111 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:26.932702 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:26.932782 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:26.933089 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:27.432642 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:27.432721 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:27.432985 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:31:27.433025 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:31:27.932736 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:27.932813 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:27.933163 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:28.432736 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:28.432812 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:28.433107 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:28.932662 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:28.932730 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:28.933022 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:29.432735 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:29.432813 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:29.433101 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:31:29.433149 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:31:29.932650 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:29.932724 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:29.933059 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:30.432740 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:30.432807 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:30.433108 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:30.932710 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:30.932784 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:30.933148 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:31.432857 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:31.432937 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:31.433271 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:31:31.433325 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:31:31.932657 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:31.932730 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:31.933052 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:32.432705 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:32.432797 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:32.433154 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:32.932867 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:32.932945 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:32.933298 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:33.433002 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:33.433120 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:33.433453 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:31:33.433504 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the same GET https://192.168.49.2:8441/api/v1/nodes/functional-667319 request repeats every ~500ms from 04:31:33.9 through 04:32:35.4, always with the Accept and User-Agent headers shown above; each attempt fails with "dial tcp 192.168.49.2:8441: connect: connection refused" (empty response, 0ms), and node_ready.go:55 logs the will-retry warning roughly every 2-2.5s ...]
	I1209 04:32:35.432934 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:35.433024 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:35.433448 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:32:35.433504 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:32:35.933268 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:35.933342 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:35.933709 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:36.433228 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:36.433294 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:36.433588 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:36.933406 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:36.933485 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:36.933802 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:37.433562 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:37.433642 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:37.433939 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:32:37.433983 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:32:37.933183 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:37.933254 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:37.933510 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:38.433295 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:38.433365 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:38.433691 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:38.933541 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:38.933625 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:38.933982 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:39.432665 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:39.432740 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:39.432999 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:39.932625 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:39.932702 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:39.933037 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:32:39.933088 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:32:40.432600 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:40.432680 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:40.432996 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:40.932646 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:40.932715 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:40.933018 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:41.432713 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:41.432789 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:41.433153 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:41.932729 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:41.932806 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:41.933137 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:32:41.933194 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:32:42.432656 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:42.432729 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:42.433054 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:42.932710 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:42.932792 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:42.933135 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:43.432827 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:43.432907 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:43.433251 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:43.932972 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:43.933046 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:43.933297 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:32:43.933337 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:32:44.433057 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:44.433132 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:44.433467 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:44.933349 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:44.933425 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:44.933760 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:45.433200 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:45.433271 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:45.433522 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:45.933329 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:45.933403 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:45.933719 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:32:45.933777 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:32:46.433543 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:46.433636 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:46.433947 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:46.933240 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:46.933306 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:46.933602 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:47.433389 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:47.433467 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:47.433758 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:47.933580 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:47.933664 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:47.934006 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:32:47.934069 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:32:48.432643 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:48.432717 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:48.432979 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:48.932681 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:48.932755 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:48.933070 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:49.432675 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:49.432745 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:49.433131 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:49.933137 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:49.933206 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:49.933501 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:50.433255 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:50.433323 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:50.433610 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:32:50.433656 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:32:50.933294 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:50.933368 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:50.933657 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:51.433200 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:51.433282 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:51.433542 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:51.933307 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:51.933394 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:51.933715 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:52.433471 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:52.433553 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:52.433873 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:32:52.433936 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:32:52.933235 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:52.933316 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:52.933571 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:53.433369 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:53.433448 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:53.433777 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:53.933615 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:53.933693 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:53.934065 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:54.432659 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:54.432727 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:54.433303 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:54.933397 1187425 type.go:168] "Request Body" body=""
	W1209 04:32:54.933475 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): client rate limiter Wait returned an error: context deadline exceeded
	I1209 04:32:54.933495 1187425 node_ready.go:38] duration metric: took 6m0.001016343s for node "functional-667319" to be "Ready" ...
	I1209 04:32:54.936503 1187425 out.go:203] 
	W1209 04:32:54.939246 1187425 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1209 04:32:54.939264 1187425 out.go:285] * 
	W1209 04:32:54.941401 1187425 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 04:32:54.944197 1187425 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:676: failed to soft start minikube. args "out/minikube-linux-arm64 start -p functional-667319 --alsologtostderr -v=8": exit status 80
functional_test.go:678: soft start took 6m5.665179477s for "functional-667319" cluster.
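Editor's note: the stderr capture above is minikube waiting for the node's Ready condition, polling GET /api/v1/nodes/functional-667319 roughly every 500 ms until its 6m0s budget expires. A minimal client-go sketch of such a readiness wait is below; this is a hypothetical helper for illustration, not minikube's actual node_ready.go, and the kubeconfig path is an assumption.

    // waitNodeReady polls until the named node reports Ready or the timeout
    // expires, retrying through transient errors such as "connection refused".
    // Hypothetical sketch only; minikube's real loop differs in detail.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func waitNodeReady(cs kubernetes.Interface, name string, timeout time.Duration) error {
        return wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, timeout, true,
            func(ctx context.Context) (bool, error) {
                node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    return false, nil // apiserver not up yet; keep retrying until the deadline
                }
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
    }

    func main() {
        // Assumption: a kubeconfig path; minikube writes one per profile.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
        if err != nil {
            panic(err)
        }
        if err := waitNodeReady(kubernetes.NewForConfigOrDie(cfg), "functional-667319", 6*time.Minute); err != nil {
            fmt.Println("node never became Ready:", err)
        }
    }

Returning (false, nil) on Get errors is what makes the loop ride out "connection refused" until the context deadline, which matches the WaitNodeCondition: context deadline exceeded failure above.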
I1209 04:32:55.525126 1144231 config.go:182] Loaded profile config "functional-667319": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-667319
helpers_test.go:243: (dbg) docker inspect functional-667319:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e5b6511799c8d5c445a335a3bd5cc9a61b518fc27ac93dad8800da366ef32129",
	        "Created": "2025-12-09T04:18:34.060957311Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1182075,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-09T04:18:34.126944158Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e4eb91ed18a24161fce60c7cdd660144ecd5b8c5029dc2dea2c5e423c2f48ce4",
	        "ResolvConfPath": "/var/lib/docker/containers/e5b6511799c8d5c445a335a3bd5cc9a61b518fc27ac93dad8800da366ef32129/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e5b6511799c8d5c445a335a3bd5cc9a61b518fc27ac93dad8800da366ef32129/hostname",
	        "HostsPath": "/var/lib/docker/containers/e5b6511799c8d5c445a335a3bd5cc9a61b518fc27ac93dad8800da366ef32129/hosts",
	        "LogPath": "/var/lib/docker/containers/e5b6511799c8d5c445a335a3bd5cc9a61b518fc27ac93dad8800da366ef32129/e5b6511799c8d5c445a335a3bd5cc9a61b518fc27ac93dad8800da366ef32129-json.log",
	        "Name": "/functional-667319",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-667319:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-667319",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e5b6511799c8d5c445a335a3bd5cc9a61b518fc27ac93dad8800da366ef32129",
	                "LowerDir": "/var/lib/docker/overlay2/b0239006282b6e4609a1f554d0a3fb94c749a13505795c8e4078cb2db194e8e0-init/diff:/var/lib/docker/overlay2/c44bb57aa59cc265266f37f2bb6e7ec0e7d641c3b4aeaa57e6d23deec6f0d1d4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b0239006282b6e4609a1f554d0a3fb94c749a13505795c8e4078cb2db194e8e0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b0239006282b6e4609a1f554d0a3fb94c749a13505795c8e4078cb2db194e8e0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b0239006282b6e4609a1f554d0a3fb94c749a13505795c8e4078cb2db194e8e0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-667319",
	                "Source": "/var/lib/docker/volumes/functional-667319/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-667319",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-667319",
	                "name.minikube.sigs.k8s.io": "functional-667319",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7c81dabcd9e57af9bce0bc0f5619f6ef3a27af43f4b649283a5bd778ab256415",
	            "SandboxKey": "/var/run/docker/netns/7c81dabcd9e5",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33900"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33901"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33904"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33902"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33903"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-667319": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "fe:40:bd:46:56:d8",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "88b3a65de70c15005c532a44219284d4df94e474ca5b78b04514c2f932b03beb",
	                    "EndpointID": "bdef7b156f4a28c1f641ae70b42db2750bb810ae6fe93fd65325e62eb232fe91",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-667319",
	                        "e5b6511799c8"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
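The inspect output shows the container itself is healthy ("Status": "running") and that the apiserver port 8441/tcp is published on 127.0.0.1:33903; only the apiserver behind that port is refusing connections. A short Go helper (hypothetical, not part of the test harness; the profile name is taken from the log above) that extracts just those two fields with a docker inspect format template:

    // Post-mortem helper: print only the container state and the host port
    // mapped to the apiserver (8441/tcp) instead of the full inspect JSON.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        format := `{{.State.Status}} {{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}`
        out, err := exec.Command("docker", "inspect", "-f", format, "functional-667319").Output()
        if err != nil {
            panic(err)
        }
        fmt.Printf("%s", out) // expected here: "running 33903"
    }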
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-667319 -n functional-667319
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-667319 -n functional-667319: exit status 2 (328.091288ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
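Since the host status still reports Running, a useful follow-up is to probe the apiserver port directly: "connection refused" from a live container means the address is reachable but nothing is listening on 8441 inside it. A minimal TCP probe (hypothetical; 127.0.0.1:33903 is the 8441/tcp mapping from the inspect output above):

    // Probe the forwarded apiserver port to distinguish "apiserver down"
    // from "container unreachable".
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        conn, err := net.DialTimeout("tcp", "127.0.0.1:33903", 2*time.Second)
        if err != nil {
            fmt.Println("apiserver port not accepting connections:", err)
            return
        }
        conn.Close()
        fmt.Println("apiserver port accepting TCP connections")
    }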
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-667319 logs -n 25
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                              ARGS                                                                               │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-717497 ssh sudo cat /etc/ssl/certs/11442312.pem                                                                                                      │ functional-717497 │ jenkins │ v1.37.0 │ 09 Dec 25 04:18 UTC │ 09 Dec 25 04:18 UTC │
	│ image          │ functional-717497 image load --daemon kicbase/echo-server:functional-717497 --alsologtostderr                                                                   │ functional-717497 │ jenkins │ v1.37.0 │ 09 Dec 25 04:18 UTC │ 09 Dec 25 04:18 UTC │
	│ ssh            │ functional-717497 ssh sudo cat /usr/share/ca-certificates/11442312.pem                                                                                          │ functional-717497 │ jenkins │ v1.37.0 │ 09 Dec 25 04:18 UTC │ 09 Dec 25 04:18 UTC │
	│ ssh            │ functional-717497 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                        │ functional-717497 │ jenkins │ v1.37.0 │ 09 Dec 25 04:18 UTC │ 09 Dec 25 04:18 UTC │
	│ image          │ functional-717497 image ls                                                                                                                                      │ functional-717497 │ jenkins │ v1.37.0 │ 09 Dec 25 04:18 UTC │ 09 Dec 25 04:18 UTC │
	│ ssh            │ functional-717497 ssh sudo cat /etc/test/nested/copy/1144231/hosts                                                                                              │ functional-717497 │ jenkins │ v1.37.0 │ 09 Dec 25 04:18 UTC │ 09 Dec 25 04:18 UTC │
	│ image          │ functional-717497 image save kicbase/echo-server:functional-717497 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr │ functional-717497 │ jenkins │ v1.37.0 │ 09 Dec 25 04:18 UTC │ 09 Dec 25 04:18 UTC │
	│ image          │ functional-717497 image rm kicbase/echo-server:functional-717497 --alsologtostderr                                                                              │ functional-717497 │ jenkins │ v1.37.0 │ 09 Dec 25 04:18 UTC │ 09 Dec 25 04:18 UTC │
	│ image          │ functional-717497 image ls                                                                                                                                      │ functional-717497 │ jenkins │ v1.37.0 │ 09 Dec 25 04:18 UTC │ 09 Dec 25 04:18 UTC │
	│ image          │ functional-717497 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr                                       │ functional-717497 │ jenkins │ v1.37.0 │ 09 Dec 25 04:18 UTC │ 09 Dec 25 04:18 UTC │
	│ image          │ functional-717497 image ls                                                                                                                                      │ functional-717497 │ jenkins │ v1.37.0 │ 09 Dec 25 04:18 UTC │ 09 Dec 25 04:18 UTC │
	│ update-context │ functional-717497 update-context --alsologtostderr -v=2                                                                                                         │ functional-717497 │ jenkins │ v1.37.0 │ 09 Dec 25 04:18 UTC │ 09 Dec 25 04:18 UTC │
	│ image          │ functional-717497 image save --daemon kicbase/echo-server:functional-717497 --alsologtostderr                                                                   │ functional-717497 │ jenkins │ v1.37.0 │ 09 Dec 25 04:18 UTC │ 09 Dec 25 04:18 UTC │
	│ update-context │ functional-717497 update-context --alsologtostderr -v=2                                                                                                         │ functional-717497 │ jenkins │ v1.37.0 │ 09 Dec 25 04:18 UTC │ 09 Dec 25 04:18 UTC │
	│ update-context │ functional-717497 update-context --alsologtostderr -v=2                                                                                                         │ functional-717497 │ jenkins │ v1.37.0 │ 09 Dec 25 04:18 UTC │ 09 Dec 25 04:18 UTC │
	│ image          │ functional-717497 image ls --format short --alsologtostderr                                                                                                     │ functional-717497 │ jenkins │ v1.37.0 │ 09 Dec 25 04:18 UTC │ 09 Dec 25 04:18 UTC │
	│ image          │ functional-717497 image ls --format yaml --alsologtostderr                                                                                                      │ functional-717497 │ jenkins │ v1.37.0 │ 09 Dec 25 04:18 UTC │ 09 Dec 25 04:18 UTC │
	│ ssh            │ functional-717497 ssh pgrep buildkitd                                                                                                                           │ functional-717497 │ jenkins │ v1.37.0 │ 09 Dec 25 04:18 UTC │                     │
	│ image          │ functional-717497 image ls --format json --alsologtostderr                                                                                                      │ functional-717497 │ jenkins │ v1.37.0 │ 09 Dec 25 04:18 UTC │ 09 Dec 25 04:18 UTC │
	│ image          │ functional-717497 image build -t localhost/my-image:functional-717497 testdata/build --alsologtostderr                                                          │ functional-717497 │ jenkins │ v1.37.0 │ 09 Dec 25 04:18 UTC │ 09 Dec 25 04:18 UTC │
	│ image          │ functional-717497 image ls --format table --alsologtostderr                                                                                                     │ functional-717497 │ jenkins │ v1.37.0 │ 09 Dec 25 04:18 UTC │ 09 Dec 25 04:18 UTC │
	│ image          │ functional-717497 image ls                                                                                                                                      │ functional-717497 │ jenkins │ v1.37.0 │ 09 Dec 25 04:18 UTC │ 09 Dec 25 04:18 UTC │
	│ delete         │ -p functional-717497                                                                                                                                            │ functional-717497 │ jenkins │ v1.37.0 │ 09 Dec 25 04:18 UTC │ 09 Dec 25 04:18 UTC │
	│ start          │ -p functional-667319 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0         │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:18 UTC │                     │
	│ start          │ -p functional-667319 --alsologtostderr -v=8                                                                                                                     │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:26 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/09 04:26:49
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 04:26:49.901158 1187425 out.go:360] Setting OutFile to fd 1 ...
	I1209 04:26:49.901350 1187425 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 04:26:49.901380 1187425 out.go:374] Setting ErrFile to fd 2...
	I1209 04:26:49.901407 1187425 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 04:26:49.902126 1187425 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-1142328/.minikube/bin
	I1209 04:26:49.902570 1187425 out.go:368] Setting JSON to false
	I1209 04:26:49.903455 1187425 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":25733,"bootTime":1765228677,"procs":156,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1209 04:26:49.903532 1187425 start.go:143] virtualization:  
	I1209 04:26:49.907035 1187425 out.go:179] * [functional-667319] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1209 04:26:49.910766 1187425 out.go:179]   - MINIKUBE_LOCATION=22081
	I1209 04:26:49.910878 1187425 notify.go:221] Checking for updates...
	I1209 04:26:49.916570 1187425 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 04:26:49.919423 1187425 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22081-1142328/kubeconfig
	I1209 04:26:49.922184 1187425 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-1142328/.minikube
	I1209 04:26:49.924947 1187425 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1209 04:26:49.927723 1187425 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 04:26:49.930999 1187425 config.go:182] Loaded profile config "functional-667319": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1209 04:26:49.931139 1187425 driver.go:422] Setting default libvirt URI to qemu:///system
	I1209 04:26:49.958230 1187425 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1209 04:26:49.958344 1187425 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 04:26:50.018007 1187425 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-09 04:26:50.006695366 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1209 04:26:50.018130 1187425 docker.go:319] overlay module found
	I1209 04:26:50.021068 1187425 out.go:179] * Using the docker driver based on existing profile
	I1209 04:26:50.024068 1187425 start.go:309] selected driver: docker
	I1209 04:26:50.024096 1187425 start.go:927] validating driver "docker" against &{Name:functional-667319 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-667319 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 04:26:50.024203 1187425 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 04:26:50.024322 1187425 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 04:26:50.086853 1187425 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-09 04:26:50.07716198 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1209 04:26:50.087299 1187425 cni.go:84] Creating CNI manager for ""
	I1209 04:26:50.087371 1187425 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1209 04:26:50.087429 1187425 start.go:353] cluster config:
	{Name:functional-667319 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-667319 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
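
The cluster config dumped above is persisted as JSON in the profile directory (see the "Saving config" line below). A minimal way to inspect it on the CI host, assuming jq is installed there:

    # Path taken from the profile.go log line below; jq availability is an assumption.
    jq . /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/config.json
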
	I1209 04:26:50.090570 1187425 out.go:179] * Starting "functional-667319" primary control-plane node in "functional-667319" cluster
	I1209 04:26:50.093453 1187425 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1209 04:26:50.098431 1187425 out.go:179] * Pulling base image v0.0.48-1765184860-22066 ...
	I1209 04:26:50.101405 1187425 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1209 04:26:50.101471 1187425 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local docker daemon
	I1209 04:26:50.101485 1187425 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1209 04:26:50.101503 1187425 cache.go:65] Caching tarball of preloaded images
	I1209 04:26:50.101600 1187425 preload.go:238] Found /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1209 04:26:50.101616 1187425 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
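
The preload logic above reduces to a file-existence check on the version-specific tarball before any download is attempted; roughly, as a shell sketch using the path from the log (not minikube's actual Go code):

    PRELOAD=/home/jenkins/minikube-integration/22081-1142328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
    # Illustrative equivalent of the cache-hit test logged above.
    [ -f "$PRELOAD" ] && echo "preload cached, skipping download"
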
	I1209 04:26:50.101720 1187425 profile.go:143] Saving config to /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/config.json ...
	I1209 04:26:50.125607 1187425 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local docker daemon, skipping pull
	I1209 04:26:50.125633 1187425 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c exists in daemon, skipping load
	I1209 04:26:50.125648 1187425 cache.go:243] Successfully downloaded all kic artifacts
	I1209 04:26:50.125680 1187425 start.go:360] acquireMachinesLock for functional-667319: {Name:mk6c31f0747796f5f8ac8ea1653d6ee60fe2a47d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 04:26:50.125839 1187425 start.go:364] duration metric: took 130.318µs to acquireMachinesLock for "functional-667319"
	I1209 04:26:50.125869 1187425 start.go:96] Skipping create...Using existing machine configuration
	I1209 04:26:50.125878 1187425 fix.go:54] fixHost starting: 
	I1209 04:26:50.126147 1187425 cli_runner.go:164] Run: docker container inspect functional-667319 --format={{.State.Status}}
	I1209 04:26:50.147043 1187425 fix.go:112] recreateIfNeeded on functional-667319: state=Running err=<nil>
	W1209 04:26:50.147073 1187425 fix.go:138] unexpected machine state, will restart: <nil>
	I1209 04:26:50.150254 1187425 out.go:252] * Updating the running docker "functional-667319" container ...
	I1209 04:26:50.150291 1187425 machine.go:94] provisionDockerMachine start ...
	I1209 04:26:50.150379 1187425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-667319
	I1209 04:26:50.167513 1187425 main.go:143] libmachine: Using SSH client type: native
	I1209 04:26:50.167851 1187425 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 33900 <nil> <nil>}
	I1209 04:26:50.167868 1187425 main.go:143] libmachine: About to run SSH command:
	hostname
	I1209 04:26:50.327552 1187425 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-667319
	
	I1209 04:26:50.327578 1187425 ubuntu.go:182] provisioning hostname "functional-667319"
	I1209 04:26:50.327642 1187425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-667319
	I1209 04:26:50.345440 1187425 main.go:143] libmachine: Using SSH client type: native
	I1209 04:26:50.345757 1187425 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 33900 <nil> <nil>}
	I1209 04:26:50.345775 1187425 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-667319 && echo "functional-667319" | sudo tee /etc/hostname
	I1209 04:26:50.504917 1187425 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-667319
	
	I1209 04:26:50.505070 1187425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-667319
	I1209 04:26:50.522734 1187425 main.go:143] libmachine: Using SSH client type: native
	I1209 04:26:50.523054 1187425 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 33900 <nil> <nil>}
	I1209 04:26:50.523070 1187425 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-667319' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-667319/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-667319' | sudo tee -a /etc/hosts; 
				fi
			fi
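
The script above only touches the 127.0.1.1 entry when the hostname is not already present in /etc/hosts. A quick verification inside the node (a sketch, assuming SSH access via the mapped port 33900 shown in the log):

    hostname                            # expected: functional-667319
    grep functional-667319 /etc/hosts   # expected: 127.0.1.1 functional-667319
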
	I1209 04:26:50.676107 1187425 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1209 04:26:50.676133 1187425 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22081-1142328/.minikube CaCertPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22081-1142328/.minikube}
	I1209 04:26:50.676165 1187425 ubuntu.go:190] setting up certificates
	I1209 04:26:50.676182 1187425 provision.go:84] configureAuth start
	I1209 04:26:50.676245 1187425 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-667319
	I1209 04:26:50.692809 1187425 provision.go:143] copyHostCerts
	I1209 04:26:50.692850 1187425 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.pem
	I1209 04:26:50.692881 1187425 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.pem, removing ...
	I1209 04:26:50.692892 1187425 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.pem
	I1209 04:26:50.692964 1187425 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.pem (1078 bytes)
	I1209 04:26:50.693060 1187425 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22081-1142328/.minikube/cert.pem
	I1209 04:26:50.693088 1187425 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-1142328/.minikube/cert.pem, removing ...
	I1209 04:26:50.693096 1187425 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-1142328/.minikube/cert.pem
	I1209 04:26:50.693122 1187425 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22081-1142328/.minikube/cert.pem (1123 bytes)
	I1209 04:26:50.693175 1187425 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22081-1142328/.minikube/key.pem
	I1209 04:26:50.693199 1187425 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-1142328/.minikube/key.pem, removing ...
	I1209 04:26:50.693206 1187425 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-1142328/.minikube/key.pem
	I1209 04:26:50.693233 1187425 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22081-1142328/.minikube/key.pem (1675 bytes)
	I1209 04:26:50.693287 1187425 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca-key.pem org=jenkins.functional-667319 san=[127.0.0.1 192.168.49.2 functional-667319 localhost minikube]
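
The generated server certificate carries the SAN list shown above (127.0.0.1, 192.168.49.2, the node hostname, localhost, minikube). One way to confirm the SANs after provisioning, assuming openssl is available on the host:

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'
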
	I1209 04:26:50.808459 1187425 provision.go:177] copyRemoteCerts
	I1209 04:26:50.808521 1187425 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 04:26:50.808568 1187425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-667319
	I1209 04:26:50.825015 1187425 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/functional-667319/id_rsa Username:docker}
	I1209 04:26:50.931904 1187425 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1209 04:26:50.931970 1187425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1209 04:26:50.950373 1187425 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1209 04:26:50.950430 1187425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1209 04:26:50.967052 1187425 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1209 04:26:50.967110 1187425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1209 04:26:50.984302 1187425 provision.go:87] duration metric: took 308.098174ms to configureAuth
	I1209 04:26:50.984386 1187425 ubuntu.go:206] setting minikube options for container-runtime
	I1209 04:26:50.984596 1187425 config.go:182] Loaded profile config "functional-667319": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1209 04:26:50.984634 1187425 machine.go:97] duration metric: took 834.335015ms to provisionDockerMachine
	I1209 04:26:50.984656 1187425 start.go:293] postStartSetup for "functional-667319" (driver="docker")
	I1209 04:26:50.984680 1187425 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 04:26:50.984759 1187425 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 04:26:50.984834 1187425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-667319
	I1209 04:26:51.005808 1187425 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/functional-667319/id_rsa Username:docker}
	I1209 04:26:51.112821 1187425 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 04:26:51.116496 1187425 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1209 04:26:51.116518 1187425 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1209 04:26:51.116523 1187425 command_runner.go:130] > VERSION_ID="12"
	I1209 04:26:51.116528 1187425 command_runner.go:130] > VERSION="12 (bookworm)"
	I1209 04:26:51.116532 1187425 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1209 04:26:51.116536 1187425 command_runner.go:130] > ID=debian
	I1209 04:26:51.116540 1187425 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1209 04:26:51.116545 1187425 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1209 04:26:51.116554 1187425 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1209 04:26:51.116627 1187425 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1209 04:26:51.116648 1187425 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1209 04:26:51.116659 1187425 filesync.go:126] Scanning /home/jenkins/minikube-integration/22081-1142328/.minikube/addons for local assets ...
	I1209 04:26:51.116715 1187425 filesync.go:126] Scanning /home/jenkins/minikube-integration/22081-1142328/.minikube/files for local assets ...
	I1209 04:26:51.116799 1187425 filesync.go:149] local asset: /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/ssl/certs/11442312.pem -> 11442312.pem in /etc/ssl/certs
	I1209 04:26:51.116806 1187425 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/ssl/certs/11442312.pem -> /etc/ssl/certs/11442312.pem
	I1209 04:26:51.116882 1187425 filesync.go:149] local asset: /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/test/nested/copy/1144231/hosts -> hosts in /etc/test/nested/copy/1144231
	I1209 04:26:51.116886 1187425 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/test/nested/copy/1144231/hosts -> /etc/test/nested/copy/1144231/hosts
	I1209 04:26:51.116933 1187425 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1144231
	I1209 04:26:51.124908 1187425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/ssl/certs/11442312.pem --> /etc/ssl/certs/11442312.pem (1708 bytes)
	I1209 04:26:51.143368 1187425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/test/nested/copy/1144231/hosts --> /etc/test/nested/copy/1144231/hosts (40 bytes)
	I1209 04:26:51.161824 1187425 start.go:296] duration metric: took 177.139225ms for postStartSetup
	I1209 04:26:51.161916 1187425 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1209 04:26:51.161982 1187425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-667319
	I1209 04:26:51.181271 1187425 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/functional-667319/id_rsa Username:docker}
	I1209 04:26:51.284406 1187425 command_runner.go:130] > 12%
	I1209 04:26:51.284922 1187425 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1209 04:26:51.288619 1187425 command_runner.go:130] > 172G
	I1209 04:26:51.288953 1187425 fix.go:56] duration metric: took 1.163071262s for fixHost
	I1209 04:26:51.288968 1187425 start.go:83] releasing machines lock for "functional-667319", held for 1.163111146s
	I1209 04:26:51.289042 1187425 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-667319
	I1209 04:26:51.305835 1187425 ssh_runner.go:195] Run: cat /version.json
	I1209 04:26:51.305885 1187425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-667319
	I1209 04:26:51.305897 1187425 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 04:26:51.305950 1187425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-667319
	I1209 04:26:51.325384 1187425 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/functional-667319/id_rsa Username:docker}
	I1209 04:26:51.327293 1187425 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/functional-667319/id_rsa Username:docker}
	I1209 04:26:51.427270 1187425 command_runner.go:130] > {"iso_version": "v1.37.0-1764843329-22032", "kicbase_version": "v0.0.48-1765184860-22066", "minikube_version": "v1.37.0", "commit": "27bcd52be11288bda2f9abde063aa47b22607695"}
	I1209 04:26:51.427541 1187425 ssh_runner.go:195] Run: systemctl --version
	I1209 04:26:51.517549 1187425 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1209 04:26:51.520210 1187425 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1209 04:26:51.520243 1187425 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1209 04:26:51.520320 1187425 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1209 04:26:51.524536 1187425 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1209 04:26:51.524574 1187425 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 04:26:51.524644 1187425 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 04:26:51.532138 1187425 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1209 04:26:51.532170 1187425 start.go:496] detecting cgroup driver to use...
	I1209 04:26:51.532202 1187425 detect.go:187] detected "cgroupfs" cgroup driver on host os
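
The cgroup driver picked here matches the host Docker daemon's setting (CgroupDriver:cgroupfs in the docker info dump at the top of this section); it can be read back directly, e.g.:

    # The cgroup driver is a daemon-level setting, so docker info reports it verbatim.
    docker info --format '{{.CgroupDriver}}'   # -> cgroupfs on this host
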
	I1209 04:26:51.532264 1187425 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1209 04:26:51.547055 1187425 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1209 04:26:51.559544 1187425 docker.go:218] disabling cri-docker service (if available) ...
	I1209 04:26:51.559644 1187425 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 04:26:51.574821 1187425 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 04:26:51.587447 1187425 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 04:26:51.703845 1187425 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 04:26:51.839863 1187425 docker.go:234] disabling docker service ...
	I1209 04:26:51.839930 1187425 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 04:26:51.856255 1187425 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 04:26:51.869081 1187425 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 04:26:51.995560 1187425 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 04:26:52.125293 1187425 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 04:26:52.137749 1187425 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 04:26:52.150135 1187425 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
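
crictl reads /etc/crictl.yaml by default, so the single runtime-endpoint line written above is enough to point every later crictl call at the containerd socket. An illustrative sanity check:

    cat /etc/crictl.yaml                    # runtime-endpoint: unix:///run/containerd/containerd.sock
    sudo crictl info >/dev/null && echo ok  # fails if containerd is unreachable
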
	I1209 04:26:52.151507 1187425 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1209 04:26:52.160197 1187425 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1209 04:26:52.168921 1187425 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1209 04:26:52.169008 1187425 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1209 04:26:52.177592 1187425 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1209 04:26:52.185997 1187425 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1209 04:26:52.194259 1187425 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1209 04:26:52.202620 1187425 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 04:26:52.210466 1187425 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1209 04:26:52.219232 1187425 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1209 04:26:52.227579 1187425 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
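
All of the containerd reconfiguration above happens through in-place sed edits of /etc/containerd/config.toml. Before the restart, the net effect can be spot-checked with one grep over the keys the log just touched (sketch):

    grep -E 'SystemdCgroup|sandbox_image|conf_dir|enable_unprivileged_ports' \
      /etc/containerd/config.toml
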
	I1209 04:26:52.236059 1187425 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 04:26:52.242619 1187425 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1209 04:26:52.243485 1187425 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 04:26:52.250890 1187425 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 04:26:52.361246 1187425 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1209 04:26:52.490552 1187425 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1209 04:26:52.490653 1187425 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1209 04:26:52.497112 1187425 command_runner.go:130] >   File: /run/containerd/containerd.sock
	I1209 04:26:52.497174 1187425 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1209 04:26:52.497206 1187425 command_runner.go:130] > Device: 0,72	Inode: 1613        Links: 1
	I1209 04:26:52.497227 1187425 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1209 04:26:52.497247 1187425 command_runner.go:130] > Access: 2025-12-09 04:26:52.442263978 +0000
	I1209 04:26:52.497281 1187425 command_runner.go:130] > Modify: 2025-12-09 04:26:52.442263978 +0000
	I1209 04:26:52.497301 1187425 command_runner.go:130] > Change: 2025-12-09 04:26:52.442263978 +0000
	I1209 04:26:52.497319 1187425 command_runner.go:130] >  Birth: -
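
The 60-second wait above amounts to polling stat on the containerd socket until it appears. A minimal shell equivalent of that loop (a sketch, not minikube's actual Go implementation):

    for i in $(seq 1 60); do
      [ -S /run/containerd/containerd.sock ] && break
      sleep 1
    done
    stat /run/containerd/containerd.sock   # non-zero exit if the socket never appeared
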
	I1209 04:26:52.497534 1187425 start.go:564] Will wait 60s for crictl version
	I1209 04:26:52.497619 1187425 ssh_runner.go:195] Run: which crictl
	I1209 04:26:52.501257 1187425 command_runner.go:130] > /usr/local/bin/crictl
	I1209 04:26:52.502001 1187425 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1209 04:26:52.535942 1187425 command_runner.go:130] > Version:  0.1.0
	I1209 04:26:52.535964 1187425 command_runner.go:130] > RuntimeName:  containerd
	I1209 04:26:52.535970 1187425 command_runner.go:130] > RuntimeVersion:  v2.2.0
	I1209 04:26:52.535975 1187425 command_runner.go:130] > RuntimeApiVersion:  v1
	I1209 04:26:52.535985 1187425 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1209 04:26:52.536096 1187425 ssh_runner.go:195] Run: containerd --version
	I1209 04:26:52.556939 1187425 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
	I1209 04:26:52.562389 1187425 ssh_runner.go:195] Run: containerd --version
	I1209 04:26:52.582187 1187425 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
	I1209 04:26:52.587659 1187425 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1209 04:26:52.590705 1187425 cli_runner.go:164] Run: docker network inspect functional-667319 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1209 04:26:52.606900 1187425 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1209 04:26:52.610849 1187425 command_runner.go:130] > 192.168.49.1	host.minikube.internal
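
host.minikube.internal resolves to the docker network gateway (192.168.49.1 here), giving workloads in the node a stable name for the host. Since the mapping lives in /etc/hosts, the same lookup can also be done with getent:

    getent hosts host.minikube.internal   # expected: 192.168.49.1 host.minikube.internal
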
	I1209 04:26:52.610974 1187425 kubeadm.go:884] updating cluster {Name:functional-667319 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-667319 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 04:26:52.611074 1187425 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1209 04:26:52.611135 1187425 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 04:26:52.634142 1187425 command_runner.go:130] > {
	I1209 04:26:52.634161 1187425 command_runner.go:130] >   "images":  [
	I1209 04:26:52.634166 1187425 command_runner.go:130] >     {
	I1209 04:26:52.634175 1187425 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1209 04:26:52.634180 1187425 command_runner.go:130] >       "repoTags":  [
	I1209 04:26:52.634186 1187425 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1209 04:26:52.634190 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.634194 1187425 command_runner.go:130] >       "repoDigests":  [
	I1209 04:26:52.634210 1187425 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1209 04:26:52.634213 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.634218 1187425 command_runner.go:130] >       "size":  "40636774",
	I1209 04:26:52.634222 1187425 command_runner.go:130] >       "username":  "",
	I1209 04:26:52.634230 1187425 command_runner.go:130] >       "pinned":  false
	I1209 04:26:52.634233 1187425 command_runner.go:130] >     },
	I1209 04:26:52.634236 1187425 command_runner.go:130] >     {
	I1209 04:26:52.634246 1187425 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1209 04:26:52.634251 1187425 command_runner.go:130] >       "repoTags":  [
	I1209 04:26:52.634256 1187425 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1209 04:26:52.634259 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.634263 1187425 command_runner.go:130] >       "repoDigests":  [
	I1209 04:26:52.634271 1187425 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1209 04:26:52.634274 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.634278 1187425 command_runner.go:130] >       "size":  "8034419",
	I1209 04:26:52.634282 1187425 command_runner.go:130] >       "username":  "",
	I1209 04:26:52.634286 1187425 command_runner.go:130] >       "pinned":  false
	I1209 04:26:52.634289 1187425 command_runner.go:130] >     },
	I1209 04:26:52.634292 1187425 command_runner.go:130] >     {
	I1209 04:26:52.634298 1187425 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1209 04:26:52.634302 1187425 command_runner.go:130] >       "repoTags":  [
	I1209 04:26:52.634307 1187425 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1209 04:26:52.634310 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.634317 1187425 command_runner.go:130] >       "repoDigests":  [
	I1209 04:26:52.634325 1187425 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1209 04:26:52.634328 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.634333 1187425 command_runner.go:130] >       "size":  "21168808",
	I1209 04:26:52.634337 1187425 command_runner.go:130] >       "username":  "nonroot",
	I1209 04:26:52.634341 1187425 command_runner.go:130] >       "pinned":  false
	I1209 04:26:52.634349 1187425 command_runner.go:130] >     },
	I1209 04:26:52.634355 1187425 command_runner.go:130] >     {
	I1209 04:26:52.634362 1187425 command_runner.go:130] >       "id":  "sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1209 04:26:52.634367 1187425 command_runner.go:130] >       "repoTags":  [
	I1209 04:26:52.634372 1187425 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1209 04:26:52.634375 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.634379 1187425 command_runner.go:130] >       "repoDigests":  [
	I1209 04:26:52.634387 1187425 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"
	I1209 04:26:52.634393 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.634397 1187425 command_runner.go:130] >       "size":  "21136588",
	I1209 04:26:52.634402 1187425 command_runner.go:130] >       "uid":  {
	I1209 04:26:52.634405 1187425 command_runner.go:130] >         "value":  "0"
	I1209 04:26:52.634408 1187425 command_runner.go:130] >       },
	I1209 04:26:52.634412 1187425 command_runner.go:130] >       "username":  "",
	I1209 04:26:52.634415 1187425 command_runner.go:130] >       "pinned":  false
	I1209 04:26:52.634418 1187425 command_runner.go:130] >     },
	I1209 04:26:52.634421 1187425 command_runner.go:130] >     {
	I1209 04:26:52.634428 1187425 command_runner.go:130] >       "id":  "sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1209 04:26:52.634431 1187425 command_runner.go:130] >       "repoTags":  [
	I1209 04:26:52.634437 1187425 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1209 04:26:52.634440 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.634443 1187425 command_runner.go:130] >       "repoDigests":  [
	I1209 04:26:52.634451 1187425 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58"
	I1209 04:26:52.634453 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.634457 1187425 command_runner.go:130] >       "size":  "24678359",
	I1209 04:26:52.634461 1187425 command_runner.go:130] >       "uid":  {
	I1209 04:26:52.634468 1187425 command_runner.go:130] >         "value":  "0"
	I1209 04:26:52.634471 1187425 command_runner.go:130] >       },
	I1209 04:26:52.634474 1187425 command_runner.go:130] >       "username":  "",
	I1209 04:26:52.634478 1187425 command_runner.go:130] >       "pinned":  false
	I1209 04:26:52.634480 1187425 command_runner.go:130] >     },
	I1209 04:26:52.634483 1187425 command_runner.go:130] >     {
	I1209 04:26:52.634490 1187425 command_runner.go:130] >       "id":  "sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1209 04:26:52.634493 1187425 command_runner.go:130] >       "repoTags":  [
	I1209 04:26:52.634499 1187425 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1209 04:26:52.634501 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.634505 1187425 command_runner.go:130] >       "repoDigests":  [
	I1209 04:26:52.634513 1187425 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d"
	I1209 04:26:52.634516 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.634520 1187425 command_runner.go:130] >       "size":  "20661043",
	I1209 04:26:52.634523 1187425 command_runner.go:130] >       "uid":  {
	I1209 04:26:52.634532 1187425 command_runner.go:130] >         "value":  "0"
	I1209 04:26:52.634535 1187425 command_runner.go:130] >       },
	I1209 04:26:52.634539 1187425 command_runner.go:130] >       "username":  "",
	I1209 04:26:52.634543 1187425 command_runner.go:130] >       "pinned":  false
	I1209 04:26:52.634546 1187425 command_runner.go:130] >     },
	I1209 04:26:52.634548 1187425 command_runner.go:130] >     {
	I1209 04:26:52.634555 1187425 command_runner.go:130] >       "id":  "sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1209 04:26:52.634558 1187425 command_runner.go:130] >       "repoTags":  [
	I1209 04:26:52.634563 1187425 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1209 04:26:52.634566 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.634569 1187425 command_runner.go:130] >       "repoDigests":  [
	I1209 04:26:52.634577 1187425 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1209 04:26:52.634580 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.634583 1187425 command_runner.go:130] >       "size":  "22429671",
	I1209 04:26:52.634587 1187425 command_runner.go:130] >       "username":  "",
	I1209 04:26:52.634591 1187425 command_runner.go:130] >       "pinned":  false
	I1209 04:26:52.634594 1187425 command_runner.go:130] >     },
	I1209 04:26:52.634597 1187425 command_runner.go:130] >     {
	I1209 04:26:52.634604 1187425 command_runner.go:130] >       "id":  "sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1209 04:26:52.634607 1187425 command_runner.go:130] >       "repoTags":  [
	I1209 04:26:52.634613 1187425 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1209 04:26:52.634616 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.634620 1187425 command_runner.go:130] >       "repoDigests":  [
	I1209 04:26:52.634627 1187425 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6"
	I1209 04:26:52.634630 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.634634 1187425 command_runner.go:130] >       "size":  "15391364",
	I1209 04:26:52.634638 1187425 command_runner.go:130] >       "uid":  {
	I1209 04:26:52.634641 1187425 command_runner.go:130] >         "value":  "0"
	I1209 04:26:52.634644 1187425 command_runner.go:130] >       },
	I1209 04:26:52.634649 1187425 command_runner.go:130] >       "username":  "",
	I1209 04:26:52.634653 1187425 command_runner.go:130] >       "pinned":  false
	I1209 04:26:52.634655 1187425 command_runner.go:130] >     },
	I1209 04:26:52.634659 1187425 command_runner.go:130] >     {
	I1209 04:26:52.634670 1187425 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1209 04:26:52.634674 1187425 command_runner.go:130] >       "repoTags":  [
	I1209 04:26:52.634678 1187425 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1209 04:26:52.634681 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.634685 1187425 command_runner.go:130] >       "repoDigests":  [
	I1209 04:26:52.634693 1187425 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1209 04:26:52.634695 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.634699 1187425 command_runner.go:130] >       "size":  "267939",
	I1209 04:26:52.634703 1187425 command_runner.go:130] >       "uid":  {
	I1209 04:26:52.634706 1187425 command_runner.go:130] >         "value":  "65535"
	I1209 04:26:52.634709 1187425 command_runner.go:130] >       },
	I1209 04:26:52.634713 1187425 command_runner.go:130] >       "username":  "",
	I1209 04:26:52.634717 1187425 command_runner.go:130] >       "pinned":  true
	I1209 04:26:52.634720 1187425 command_runner.go:130] >     }
	I1209 04:26:52.634723 1187425 command_runner.go:130] >   ]
	I1209 04:26:52.634726 1187425 command_runner.go:130] > }
	I1209 04:26:52.636238 1187425 containerd.go:627] all images are preloaded for containerd runtime.
	I1209 04:26:52.636265 1187425 containerd.go:534] Images already preloaded, skipping extraction
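
The JSON dump above is what drives the "all images are preloaded" decision: every repo tag expected for v1.35.0-beta.0 on containerd is present. To list just the tags rather than the raw JSON, assuming jq is installed on the node:

    sudo crictl images --output json | jq -r '.images[].repoTags[]'
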
	I1209 04:26:52.636328 1187425 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 04:26:52.662300 1187425 command_runner.go:130] > {
	I1209 04:26:52.662318 1187425 command_runner.go:130] >   "images":  [
	I1209 04:26:52.662323 1187425 command_runner.go:130] >     {
	I1209 04:26:52.662332 1187425 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1209 04:26:52.662349 1187425 command_runner.go:130] >       "repoTags":  [
	I1209 04:26:52.662355 1187425 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1209 04:26:52.662358 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.662363 1187425 command_runner.go:130] >       "repoDigests":  [
	I1209 04:26:52.662375 1187425 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1209 04:26:52.662379 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.662383 1187425 command_runner.go:130] >       "size":  "40636774",
	I1209 04:26:52.662388 1187425 command_runner.go:130] >       "username":  "",
	I1209 04:26:52.662392 1187425 command_runner.go:130] >       "pinned":  false
	I1209 04:26:52.662395 1187425 command_runner.go:130] >     },
	I1209 04:26:52.662398 1187425 command_runner.go:130] >     {
	I1209 04:26:52.662406 1187425 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1209 04:26:52.662410 1187425 command_runner.go:130] >       "repoTags":  [
	I1209 04:26:52.662416 1187425 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1209 04:26:52.662420 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.662424 1187425 command_runner.go:130] >       "repoDigests":  [
	I1209 04:26:52.662436 1187425 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1209 04:26:52.662440 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.662444 1187425 command_runner.go:130] >       "size":  "8034419",
	I1209 04:26:52.662448 1187425 command_runner.go:130] >       "username":  "",
	I1209 04:26:52.662452 1187425 command_runner.go:130] >       "pinned":  false
	I1209 04:26:52.662460 1187425 command_runner.go:130] >     },
	I1209 04:26:52.662463 1187425 command_runner.go:130] >     {
	I1209 04:26:52.662470 1187425 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1209 04:26:52.662474 1187425 command_runner.go:130] >       "repoTags":  [
	I1209 04:26:52.662479 1187425 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1209 04:26:52.662482 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.662488 1187425 command_runner.go:130] >       "repoDigests":  [
	I1209 04:26:52.662496 1187425 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1209 04:26:52.662500 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.662504 1187425 command_runner.go:130] >       "size":  "21168808",
	I1209 04:26:52.662508 1187425 command_runner.go:130] >       "username":  "nonroot",
	I1209 04:26:52.662512 1187425 command_runner.go:130] >       "pinned":  false
	I1209 04:26:52.662515 1187425 command_runner.go:130] >     },
	I1209 04:26:52.662519 1187425 command_runner.go:130] >     {
	I1209 04:26:52.662525 1187425 command_runner.go:130] >       "id":  "sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1209 04:26:52.662529 1187425 command_runner.go:130] >       "repoTags":  [
	I1209 04:26:52.662534 1187425 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1209 04:26:52.662538 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.662541 1187425 command_runner.go:130] >       "repoDigests":  [
	I1209 04:26:52.662549 1187425 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"
	I1209 04:26:52.662552 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.662556 1187425 command_runner.go:130] >       "size":  "21136588",
	I1209 04:26:52.662561 1187425 command_runner.go:130] >       "uid":  {
	I1209 04:26:52.662565 1187425 command_runner.go:130] >         "value":  "0"
	I1209 04:26:52.662568 1187425 command_runner.go:130] >       },
	I1209 04:26:52.662572 1187425 command_runner.go:130] >       "username":  "",
	I1209 04:26:52.662576 1187425 command_runner.go:130] >       "pinned":  false
	I1209 04:26:52.662579 1187425 command_runner.go:130] >     },
	I1209 04:26:52.662585 1187425 command_runner.go:130] >     {
	I1209 04:26:52.662592 1187425 command_runner.go:130] >       "id":  "sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1209 04:26:52.662596 1187425 command_runner.go:130] >       "repoTags":  [
	I1209 04:26:52.662601 1187425 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1209 04:26:52.662605 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.662609 1187425 command_runner.go:130] >       "repoDigests":  [
	I1209 04:26:52.662617 1187425 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58"
	I1209 04:26:52.662619 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.662624 1187425 command_runner.go:130] >       "size":  "24678359",
	I1209 04:26:52.662627 1187425 command_runner.go:130] >       "uid":  {
	I1209 04:26:52.662639 1187425 command_runner.go:130] >         "value":  "0"
	I1209 04:26:52.662642 1187425 command_runner.go:130] >       },
	I1209 04:26:52.662646 1187425 command_runner.go:130] >       "username":  "",
	I1209 04:26:52.662650 1187425 command_runner.go:130] >       "pinned":  false
	I1209 04:26:52.662653 1187425 command_runner.go:130] >     },
	I1209 04:26:52.662656 1187425 command_runner.go:130] >     {
	I1209 04:26:52.662663 1187425 command_runner.go:130] >       "id":  "sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1209 04:26:52.662667 1187425 command_runner.go:130] >       "repoTags":  [
	I1209 04:26:52.662672 1187425 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1209 04:26:52.662675 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.662679 1187425 command_runner.go:130] >       "repoDigests":  [
	I1209 04:26:52.662687 1187425 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d"
	I1209 04:26:52.662690 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.662694 1187425 command_runner.go:130] >       "size":  "20661043",
	I1209 04:26:52.662697 1187425 command_runner.go:130] >       "uid":  {
	I1209 04:26:52.662701 1187425 command_runner.go:130] >         "value":  "0"
	I1209 04:26:52.662704 1187425 command_runner.go:130] >       },
	I1209 04:26:52.662707 1187425 command_runner.go:130] >       "username":  "",
	I1209 04:26:52.662712 1187425 command_runner.go:130] >       "pinned":  false
	I1209 04:26:52.662714 1187425 command_runner.go:130] >     },
	I1209 04:26:52.662717 1187425 command_runner.go:130] >     {
	I1209 04:26:52.662725 1187425 command_runner.go:130] >       "id":  "sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1209 04:26:52.662729 1187425 command_runner.go:130] >       "repoTags":  [
	I1209 04:26:52.662737 1187425 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1209 04:26:52.662741 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.662744 1187425 command_runner.go:130] >       "repoDigests":  [
	I1209 04:26:52.662752 1187425 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1209 04:26:52.662755 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.662759 1187425 command_runner.go:130] >       "size":  "22429671",
	I1209 04:26:52.662763 1187425 command_runner.go:130] >       "username":  "",
	I1209 04:26:52.662767 1187425 command_runner.go:130] >       "pinned":  false
	I1209 04:26:52.662770 1187425 command_runner.go:130] >     },
	I1209 04:26:52.662774 1187425 command_runner.go:130] >     {
	I1209 04:26:52.662781 1187425 command_runner.go:130] >       "id":  "sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1209 04:26:52.662785 1187425 command_runner.go:130] >       "repoTags":  [
	I1209 04:26:52.662791 1187425 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1209 04:26:52.662794 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.662798 1187425 command_runner.go:130] >       "repoDigests":  [
	I1209 04:26:52.662805 1187425 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6"
	I1209 04:26:52.662808 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.662813 1187425 command_runner.go:130] >       "size":  "15391364",
	I1209 04:26:52.662816 1187425 command_runner.go:130] >       "uid":  {
	I1209 04:26:52.662820 1187425 command_runner.go:130] >         "value":  "0"
	I1209 04:26:52.662823 1187425 command_runner.go:130] >       },
	I1209 04:26:52.662827 1187425 command_runner.go:130] >       "username":  "",
	I1209 04:26:52.662831 1187425 command_runner.go:130] >       "pinned":  false
	I1209 04:26:52.662834 1187425 command_runner.go:130] >     },
	I1209 04:26:52.662837 1187425 command_runner.go:130] >     {
	I1209 04:26:52.662843 1187425 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1209 04:26:52.662847 1187425 command_runner.go:130] >       "repoTags":  [
	I1209 04:26:52.662852 1187425 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1209 04:26:52.662855 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.662858 1187425 command_runner.go:130] >       "repoDigests":  [
	I1209 04:26:52.662866 1187425 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1209 04:26:52.662869 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.662873 1187425 command_runner.go:130] >       "size":  "267939",
	I1209 04:26:52.662881 1187425 command_runner.go:130] >       "uid":  {
	I1209 04:26:52.662886 1187425 command_runner.go:130] >         "value":  "65535"
	I1209 04:26:52.662890 1187425 command_runner.go:130] >       },
	I1209 04:26:52.662894 1187425 command_runner.go:130] >       "username":  "",
	I1209 04:26:52.662897 1187425 command_runner.go:130] >       "pinned":  true
	I1209 04:26:52.662900 1187425 command_runner.go:130] >     }
	I1209 04:26:52.662903 1187425 command_runner.go:130] >   ]
	I1209 04:26:52.662906 1187425 command_runner.go:130] > }
	I1209 04:26:52.665193 1187425 containerd.go:627] all images are preloaded for containerd runtime.
	I1209 04:26:52.665212 1187425 cache_images.go:86] Images are preloaded, skipping loading
	I1209 04:26:52.665219 1187425 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
	I1209 04:26:52.665322 1187425 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-667319 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-667319 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
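
The kubelet flags above are installed as a systemd ExecStart override (the empty ExecStart= line resets whatever the base unit defines before the real command is set). The merged unit can be reviewed on the node, illustratively:

    systemctl cat kubelet | grep -A2 '^ExecStart='
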
	I1209 04:26:52.665384 1187425 ssh_runner.go:195] Run: sudo crictl info
	I1209 04:26:52.686718 1187425 command_runner.go:130] > {
	I1209 04:26:52.686786 1187425 command_runner.go:130] >   "cniconfig": {
	I1209 04:26:52.686805 1187425 command_runner.go:130] >     "Networks": [
	I1209 04:26:52.686825 1187425 command_runner.go:130] >       {
	I1209 04:26:52.686864 1187425 command_runner.go:130] >         "Config": {
	I1209 04:26:52.686886 1187425 command_runner.go:130] >           "CNIVersion": "0.3.1",
	I1209 04:26:52.686905 1187425 command_runner.go:130] >           "Name": "cni-loopback",
	I1209 04:26:52.686923 1187425 command_runner.go:130] >           "Plugins": [
	I1209 04:26:52.686940 1187425 command_runner.go:130] >             {
	I1209 04:26:52.686967 1187425 command_runner.go:130] >               "Network": {
	I1209 04:26:52.686991 1187425 command_runner.go:130] >                 "ipam": {},
	I1209 04:26:52.687011 1187425 command_runner.go:130] >                 "type": "loopback"
	I1209 04:26:52.687028 1187425 command_runner.go:130] >               },
	I1209 04:26:52.687048 1187425 command_runner.go:130] >               "Source": "{\"type\":\"loopback\"}"
	I1209 04:26:52.687074 1187425 command_runner.go:130] >             }
	I1209 04:26:52.687097 1187425 command_runner.go:130] >           ],
	I1209 04:26:52.687120 1187425 command_runner.go:130] >           "Source": "{\n\"cniVersion\": \"0.3.1\",\n\"name\": \"cni-loopback\",\n\"plugins\": [{\n  \"type\": \"loopback\"\n}]\n}"
	I1209 04:26:52.687138 1187425 command_runner.go:130] >         },
	I1209 04:26:52.687160 1187425 command_runner.go:130] >         "IFName": "lo"
	I1209 04:26:52.687191 1187425 command_runner.go:130] >       }
	I1209 04:26:52.687207 1187425 command_runner.go:130] >     ],
	I1209 04:26:52.687225 1187425 command_runner.go:130] >     "PluginConfDir": "/etc/cni/net.d",
	I1209 04:26:52.687243 1187425 command_runner.go:130] >     "PluginDirs": [
	I1209 04:26:52.687272 1187425 command_runner.go:130] >       "/opt/cni/bin"
	I1209 04:26:52.687293 1187425 command_runner.go:130] >     ],
	I1209 04:26:52.687317 1187425 command_runner.go:130] >     "PluginMaxConfNum": 1,
	I1209 04:26:52.687334 1187425 command_runner.go:130] >     "Prefix": "eth"
	I1209 04:26:52.687351 1187425 command_runner.go:130] >   },
	I1209 04:26:52.687378 1187425 command_runner.go:130] >   "config": {
	I1209 04:26:52.687401 1187425 command_runner.go:130] >     "cdiSpecDirs": [
	I1209 04:26:52.687418 1187425 command_runner.go:130] >       "/etc/cdi",
	I1209 04:26:52.687438 1187425 command_runner.go:130] >       "/var/run/cdi"
	I1209 04:26:52.687457 1187425 command_runner.go:130] >     ],
	I1209 04:26:52.687483 1187425 command_runner.go:130] >     "cni": {
	I1209 04:26:52.687505 1187425 command_runner.go:130] >       "binDir": "",
	I1209 04:26:52.687560 1187425 command_runner.go:130] >       "binDirs": [
	I1209 04:26:52.687588 1187425 command_runner.go:130] >         "/opt/cni/bin"
	I1209 04:26:52.687609 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.687628 1187425 command_runner.go:130] >       "confDir": "/etc/cni/net.d",
	I1209 04:26:52.687646 1187425 command_runner.go:130] >       "confTemplate": "",
	I1209 04:26:52.687665 1187425 command_runner.go:130] >       "ipPref": "",
	I1209 04:26:52.687692 1187425 command_runner.go:130] >       "maxConfNum": 1,
	I1209 04:26:52.687715 1187425 command_runner.go:130] >       "setupSerially": false,
	I1209 04:26:52.687733 1187425 command_runner.go:130] >       "useInternalLoopback": false
	I1209 04:26:52.687749 1187425 command_runner.go:130] >     },
	I1209 04:26:52.687775 1187425 command_runner.go:130] >     "containerd": {
	I1209 04:26:52.687802 1187425 command_runner.go:130] >       "defaultRuntimeName": "runc",
	I1209 04:26:52.687825 1187425 command_runner.go:130] >       "ignoreBlockIONotEnabledErrors": false,
	I1209 04:26:52.687845 1187425 command_runner.go:130] >       "ignoreRdtNotEnabledErrors": false,
	I1209 04:26:52.687861 1187425 command_runner.go:130] >       "runtimes": {
	I1209 04:26:52.687878 1187425 command_runner.go:130] >         "runc": {
	I1209 04:26:52.687905 1187425 command_runner.go:130] >           "ContainerAnnotations": null,
	I1209 04:26:52.687929 1187425 command_runner.go:130] >           "PodAnnotations": null,
	I1209 04:26:52.687948 1187425 command_runner.go:130] >           "baseRuntimeSpec": "",
	I1209 04:26:52.687965 1187425 command_runner.go:130] >           "cgroupWritable": false,
	I1209 04:26:52.687982 1187425 command_runner.go:130] >           "cniConfDir": "",
	I1209 04:26:52.688009 1187425 command_runner.go:130] >           "cniMaxConfNum": 0,
	I1209 04:26:52.688042 1187425 command_runner.go:130] >           "io_type": "",
	I1209 04:26:52.688055 1187425 command_runner.go:130] >           "options": {
	I1209 04:26:52.688060 1187425 command_runner.go:130] >             "BinaryName": "",
	I1209 04:26:52.688065 1187425 command_runner.go:130] >             "CriuImagePath": "",
	I1209 04:26:52.688070 1187425 command_runner.go:130] >             "CriuWorkPath": "",
	I1209 04:26:52.688078 1187425 command_runner.go:130] >             "IoGid": 0,
	I1209 04:26:52.688082 1187425 command_runner.go:130] >             "IoUid": 0,
	I1209 04:26:52.688086 1187425 command_runner.go:130] >             "NoNewKeyring": false,
	I1209 04:26:52.688093 1187425 command_runner.go:130] >             "Root": "",
	I1209 04:26:52.688097 1187425 command_runner.go:130] >             "ShimCgroup": "",
	I1209 04:26:52.688109 1187425 command_runner.go:130] >             "SystemdCgroup": false
	I1209 04:26:52.688113 1187425 command_runner.go:130] >           },
	I1209 04:26:52.688118 1187425 command_runner.go:130] >           "privileged_without_host_devices": false,
	I1209 04:26:52.688128 1187425 command_runner.go:130] >           "privileged_without_host_devices_all_devices_allowed": false,
	I1209 04:26:52.688138 1187425 command_runner.go:130] >           "runtimePath": "",
	I1209 04:26:52.688145 1187425 command_runner.go:130] >           "runtimeType": "io.containerd.runc.v2",
	I1209 04:26:52.688153 1187425 command_runner.go:130] >           "sandboxer": "podsandbox",
	I1209 04:26:52.688157 1187425 command_runner.go:130] >           "snapshotter": ""
	I1209 04:26:52.688161 1187425 command_runner.go:130] >         }
	I1209 04:26:52.688164 1187425 command_runner.go:130] >       }
	I1209 04:26:52.688167 1187425 command_runner.go:130] >     },
	I1209 04:26:52.688181 1187425 command_runner.go:130] >     "containerdEndpoint": "/run/containerd/containerd.sock",
	I1209 04:26:52.688190 1187425 command_runner.go:130] >     "containerdRootDir": "/var/lib/containerd",
	I1209 04:26:52.688198 1187425 command_runner.go:130] >     "device_ownership_from_security_context": false,
	I1209 04:26:52.688205 1187425 command_runner.go:130] >     "disableApparmor": false,
	I1209 04:26:52.688210 1187425 command_runner.go:130] >     "disableHugetlbController": true,
	I1209 04:26:52.688218 1187425 command_runner.go:130] >     "disableProcMount": false,
	I1209 04:26:52.688223 1187425 command_runner.go:130] >     "drainExecSyncIOTimeout": "0s",
	I1209 04:26:52.688231 1187425 command_runner.go:130] >     "enableCDI": true,
	I1209 04:26:52.688235 1187425 command_runner.go:130] >     "enableSelinux": false,
	I1209 04:26:52.688240 1187425 command_runner.go:130] >     "enableUnprivilegedICMP": true,
	I1209 04:26:52.688248 1187425 command_runner.go:130] >     "enableUnprivilegedPorts": true,
	I1209 04:26:52.688253 1187425 command_runner.go:130] >     "ignoreDeprecationWarnings": null,
	I1209 04:26:52.688259 1187425 command_runner.go:130] >     "ignoreImageDefinedVolumes": false,
	I1209 04:26:52.688269 1187425 command_runner.go:130] >     "maxContainerLogLineSize": 16384,
	I1209 04:26:52.688278 1187425 command_runner.go:130] >     "netnsMountsUnderStateDir": false,
	I1209 04:26:52.688282 1187425 command_runner.go:130] >     "restrictOOMScoreAdj": false,
	I1209 04:26:52.688293 1187425 command_runner.go:130] >     "rootDir": "/var/lib/containerd/io.containerd.grpc.v1.cri",
	I1209 04:26:52.688297 1187425 command_runner.go:130] >     "selinuxCategoryRange": 1024,
	I1209 04:26:52.688306 1187425 command_runner.go:130] >     "stateDir": "/run/containerd/io.containerd.grpc.v1.cri",
	I1209 04:26:52.688312 1187425 command_runner.go:130] >     "tolerateMissingHugetlbController": true,
	I1209 04:26:52.688320 1187425 command_runner.go:130] >     "unsetSeccompProfile": ""
	I1209 04:26:52.688323 1187425 command_runner.go:130] >   },
	I1209 04:26:52.688327 1187425 command_runner.go:130] >   "features": {
	I1209 04:26:52.688332 1187425 command_runner.go:130] >     "supplemental_groups_policy": true
	I1209 04:26:52.688337 1187425 command_runner.go:130] >   },
	I1209 04:26:52.688341 1187425 command_runner.go:130] >   "golang": "go1.24.9",
	I1209 04:26:52.688355 1187425 command_runner.go:130] >   "lastCNILoadStatus": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1209 04:26:52.688368 1187425 command_runner.go:130] >   "lastCNILoadStatus.default": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1209 04:26:52.688376 1187425 command_runner.go:130] >   "runtimeHandlers": [
	I1209 04:26:52.688379 1187425 command_runner.go:130] >     {
	I1209 04:26:52.688388 1187425 command_runner.go:130] >       "features": {
	I1209 04:26:52.688394 1187425 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1209 04:26:52.688403 1187425 command_runner.go:130] >         "user_namespaces": true
	I1209 04:26:52.688406 1187425 command_runner.go:130] >       }
	I1209 04:26:52.688409 1187425 command_runner.go:130] >     },
	I1209 04:26:52.688412 1187425 command_runner.go:130] >     {
	I1209 04:26:52.688416 1187425 command_runner.go:130] >       "features": {
	I1209 04:26:52.688423 1187425 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1209 04:26:52.688432 1187425 command_runner.go:130] >         "user_namespaces": true
	I1209 04:26:52.688435 1187425 command_runner.go:130] >       },
	I1209 04:26:52.688439 1187425 command_runner.go:130] >       "name": "runc"
	I1209 04:26:52.688446 1187425 command_runner.go:130] >     }
	I1209 04:26:52.688449 1187425 command_runner.go:130] >   ],
	I1209 04:26:52.688457 1187425 command_runner.go:130] >   "status": {
	I1209 04:26:52.688461 1187425 command_runner.go:130] >     "conditions": [
	I1209 04:26:52.688469 1187425 command_runner.go:130] >       {
	I1209 04:26:52.688476 1187425 command_runner.go:130] >         "message": "",
	I1209 04:26:52.688484 1187425 command_runner.go:130] >         "reason": "",
	I1209 04:26:52.688488 1187425 command_runner.go:130] >         "status": true,
	I1209 04:26:52.688493 1187425 command_runner.go:130] >         "type": "RuntimeReady"
	I1209 04:26:52.688497 1187425 command_runner.go:130] >       },
	I1209 04:26:52.688502 1187425 command_runner.go:130] >       {
	I1209 04:26:52.688509 1187425 command_runner.go:130] >         "message": "Network plugin returns error: cni plugin not initialized",
	I1209 04:26:52.688518 1187425 command_runner.go:130] >         "reason": "NetworkPluginNotReady",
	I1209 04:26:52.688522 1187425 command_runner.go:130] >         "status": false,
	I1209 04:26:52.688530 1187425 command_runner.go:130] >         "type": "NetworkReady"
	I1209 04:26:52.688534 1187425 command_runner.go:130] >       },
	I1209 04:26:52.688541 1187425 command_runner.go:130] >       {
	I1209 04:26:52.688568 1187425 command_runner.go:130] >         "message": "{\"io.containerd.deprecation/cgroup-v1\":\"The support for cgroup v1 is deprecated since containerd v2.2 and will be removed by no later than May 2029. Upgrade the host to use cgroup v2.\"}",
	I1209 04:26:52.688578 1187425 command_runner.go:130] >         "reason": "ContainerdHasDeprecationWarnings",
	I1209 04:26:52.688584 1187425 command_runner.go:130] >         "status": false,
	I1209 04:26:52.688590 1187425 command_runner.go:130] >         "type": "ContainerdHasNoDeprecationWarnings"
	I1209 04:26:52.688595 1187425 command_runner.go:130] >       }
	I1209 04:26:52.688598 1187425 command_runner.go:130] >     ]
	I1209 04:26:52.688606 1187425 command_runner.go:130] >   }
	I1209 04:26:52.688609 1187425 command_runner.go:130] > }
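	The JSON dump above is containerd's CRI status document (the same payload `crictl info` prints). Note the NetworkReady condition: it is false with reason NetworkPluginNotReady because /etc/cni/net.d holds no network config yet, which is exactly why the CNI manager step that follows recommends kindnet. A minimal spot-check on the node, assuming crictl and jq are installed:
	
	    sudo crictl info | jq '.status.conditions[] | select(.type == "NetworkReady")'
	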
	I1209 04:26:52.690920 1187425 cni.go:84] Creating CNI manager for ""
	I1209 04:26:52.690942 1187425 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1209 04:26:52.690965 1187425 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1209 04:26:52.690987 1187425 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-667319 NodeName:functional-667319 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1209 04:26:52.691101 1187425 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-667319"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
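	The four documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are what gets copied to /var/tmp/minikube/kubeadm.yaml.new a few lines below. One way to sanity-check such a file by hand, assuming a matching kubeadm binary is available (recent kubeadm releases ship this subcommand):
	
	    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
	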
	I1209 04:26:52.691179 1187425 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1209 04:26:52.697985 1187425 command_runner.go:130] > kubeadm
	I1209 04:26:52.698006 1187425 command_runner.go:130] > kubectl
	I1209 04:26:52.698010 1187425 command_runner.go:130] > kubelet
	I1209 04:26:52.698825 1187425 binaries.go:51] Found k8s binaries, skipping transfer
	I1209 04:26:52.698896 1187425 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1209 04:26:52.706638 1187425 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1209 04:26:52.718822 1187425 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1209 04:26:52.731825 1187425 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
	I1209 04:26:52.744962 1187425 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1209 04:26:52.748733 1187425 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1209 04:26:52.748987 1187425 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 04:26:52.855986 1187425 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 04:26:53.181367 1187425 certs.go:69] Setting up /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319 for IP: 192.168.49.2
	I1209 04:26:53.181392 1187425 certs.go:195] generating shared ca certs ...
	I1209 04:26:53.181408 1187425 certs.go:227] acquiring lock for ca certs: {Name:mk15788702f8c4e23b5aeab3f44961d296fab259 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 04:26:53.181570 1187425 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.key
	I1209 04:26:53.181618 1187425 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/proxy-client-ca.key
	I1209 04:26:53.181630 1187425 certs.go:257] generating profile certs ...
	I1209 04:26:53.181740 1187425 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/client.key
	I1209 04:26:53.181805 1187425 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/apiserver.key.c80eb595
	I1209 04:26:53.181848 1187425 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/proxy-client.key
	I1209 04:26:53.181859 1187425 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1209 04:26:53.181873 1187425 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1209 04:26:53.181889 1187425 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1142328/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1209 04:26:53.181899 1187425 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1142328/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1209 04:26:53.181914 1187425 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1209 04:26:53.181925 1187425 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1209 04:26:53.181943 1187425 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1209 04:26:53.181954 1187425 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1209 04:26:53.182004 1187425 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/1144231.pem (1338 bytes)
	W1209 04:26:53.182038 1187425 certs.go:480] ignoring /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/1144231_empty.pem, impossibly tiny 0 bytes
	I1209 04:26:53.182050 1187425 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca-key.pem (1679 bytes)
	I1209 04:26:53.182079 1187425 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem (1078 bytes)
	I1209 04:26:53.182105 1187425 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/cert.pem (1123 bytes)
	I1209 04:26:53.182136 1187425 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/key.pem (1675 bytes)
	I1209 04:26:53.182187 1187425 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/ssl/certs/11442312.pem (1708 bytes)
	I1209 04:26:53.182243 1187425 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/ssl/certs/11442312.pem -> /usr/share/ca-certificates/11442312.pem
	I1209 04:26:53.182260 1187425 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1209 04:26:53.182277 1187425 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/1144231.pem -> /usr/share/ca-certificates/1144231.pem
	I1209 04:26:53.182817 1187425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 04:26:53.202751 1187425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1209 04:26:53.220083 1187425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 04:26:53.237728 1187425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1209 04:26:53.255002 1187425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1209 04:26:53.271923 1187425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1209 04:26:53.289401 1187425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 04:26:53.306616 1187425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1209 04:26:53.323564 1187425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/ssl/certs/11442312.pem --> /usr/share/ca-certificates/11442312.pem (1708 bytes)
	I1209 04:26:53.340526 1187425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 04:26:53.357221 1187425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/1144231.pem --> /usr/share/ca-certificates/1144231.pem (1338 bytes)
	I1209 04:26:53.373705 1187425 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 04:26:53.386274 1187425 ssh_runner.go:195] Run: openssl version
	I1209 04:26:53.391826 1187425 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1209 04:26:53.392252 1187425 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/11442312.pem
	I1209 04:26:53.399306 1187425 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/11442312.pem /etc/ssl/certs/11442312.pem
	I1209 04:26:53.406404 1187425 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11442312.pem
	I1209 04:26:53.409862 1187425 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec  9 04:18 /usr/share/ca-certificates/11442312.pem
	I1209 04:26:53.409914 1187425 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 04:18 /usr/share/ca-certificates/11442312.pem
	I1209 04:26:53.409972 1187425 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11442312.pem
	I1209 04:26:53.450109 1187425 command_runner.go:130] > 3ec20f2e
	I1209 04:26:53.450580 1187425 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1209 04:26:53.457724 1187425 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1209 04:26:53.464857 1187425 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1209 04:26:53.472136 1187425 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 04:26:53.475789 1187425 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec  9 04:09 /usr/share/ca-certificates/minikubeCA.pem
	I1209 04:26:53.475830 1187425 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 04:09 /usr/share/ca-certificates/minikubeCA.pem
	I1209 04:26:53.475880 1187425 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 04:26:53.517012 1187425 command_runner.go:130] > b5213941
	I1209 04:26:53.517090 1187425 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1209 04:26:53.524195 1187425 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1144231.pem
	I1209 04:26:53.531059 1187425 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1144231.pem /etc/ssl/certs/1144231.pem
	I1209 04:26:53.537929 1187425 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1144231.pem
	I1209 04:26:53.541362 1187425 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec  9 04:18 /usr/share/ca-certificates/1144231.pem
	I1209 04:26:53.541587 1187425 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 04:18 /usr/share/ca-certificates/1144231.pem
	I1209 04:26:53.541670 1187425 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1144231.pem
	I1209 04:26:53.586134 1187425 command_runner.go:130] > 51391683
	I1209 04:26:53.586694 1187425 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
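	The hex values printed by openssl above (3ec20f2e, b5213941, 51391683) are OpenSSL subject-name hashes, and the <hash>.0 symlinks created in /etc/ssl/certs are the lookup layout OpenSSL expects for trusted CAs. The manual equivalent of one round of the loop above:
	
	    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"
	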
	I1209 04:26:53.593775 1187425 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 04:26:53.597060 1187425 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 04:26:53.597083 1187425 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1209 04:26:53.597090 1187425 command_runner.go:130] > Device: 259,1	Inode: 1317519     Links: 1
	I1209 04:26:53.597096 1187425 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1209 04:26:53.597101 1187425 command_runner.go:130] > Access: 2025-12-09 04:22:46.557738038 +0000
	I1209 04:26:53.597107 1187425 command_runner.go:130] > Modify: 2025-12-09 04:18:42.397294101 +0000
	I1209 04:26:53.597112 1187425 command_runner.go:130] > Change: 2025-12-09 04:18:42.397294101 +0000
	I1209 04:26:53.597120 1187425 command_runner.go:130] >  Birth: 2025-12-09 04:18:42.397294101 +0000
	I1209 04:26:53.597202 1187425 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1209 04:26:53.637326 1187425 command_runner.go:130] > Certificate will not expire
	I1209 04:26:53.637892 1187425 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1209 04:26:53.678262 1187425 command_runner.go:130] > Certificate will not expire
	I1209 04:26:53.678829 1187425 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1209 04:26:53.719319 1187425 command_runner.go:130] > Certificate will not expire
	I1209 04:26:53.719397 1187425 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1209 04:26:53.760102 1187425 command_runner.go:130] > Certificate will not expire
	I1209 04:26:53.760184 1187425 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1209 04:26:53.805340 1187425 command_runner.go:130] > Certificate will not expire
	I1209 04:26:53.805854 1187425 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1209 04:26:53.846216 1187425 command_runner.go:130] > Certificate will not expire
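	-checkend 86400 asks openssl whether the certificate expires within the next 86400 seconds (24 hours); exit status 0 plus the "Certificate will not expire" message means it stays valid, which every check above reports. For example:
	
	    openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 && echo "valid for at least another day"
	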
	I1209 04:26:53.846284 1187425 kubeadm.go:401] StartCluster: {Name:functional-667319 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-667319 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 04:26:53.846701 1187425 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1209 04:26:53.846774 1187425 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 04:26:53.877891 1187425 cri.go:89] found id: ""
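	found id: "" means the crictl query returned no container IDs, i.e. containerd lists no kube-system containers at this point, so minikube falls back to checking the on-disk kubeadm state next. The same query, runnable directly on the node:
	
	    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	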
	I1209 04:26:53.877982 1187425 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1209 04:26:53.884657 1187425 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1209 04:26:53.884683 1187425 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1209 04:26:53.884690 1187425 command_runner.go:130] > /var/lib/minikube/etcd:
	I1209 04:26:53.885556 1187425 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1209 04:26:53.885572 1187425 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1209 04:26:53.885646 1187425 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1209 04:26:53.892789 1187425 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1209 04:26:53.893171 1187425 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-667319" does not appear in /home/jenkins/minikube-integration/22081-1142328/kubeconfig
	I1209 04:26:53.893275 1187425 kubeconfig.go:62] /home/jenkins/minikube-integration/22081-1142328/kubeconfig needs updating (will repair): [kubeconfig missing "functional-667319" cluster setting kubeconfig missing "functional-667319" context setting]
	I1209 04:26:53.893568 1187425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1142328/kubeconfig: {Name:mk0dc127429f88dc4fdfb2d110deebc58207c1b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 04:26:53.893971 1187425 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22081-1142328/kubeconfig
	I1209 04:26:53.894121 1187425 kapi.go:59] client config for functional-667319: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/client.crt", KeyFile:"/home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/client.key", CAFile:"/home/jenkins/minikube-integration/22081-1142328/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb3ec0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1209 04:26:53.894601 1187425 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1209 04:26:53.894621 1187425 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1209 04:26:53.894627 1187425 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1209 04:26:53.894636 1187425 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1209 04:26:53.894643 1187425 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
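	These are client-go feature gates logged at their defaults. Client-go's env-driven gate reader looks up variables of the form KUBE_FEATURE_<Name> (an assumption about the prefix based on the client-go features package, not something this log confirms), so a gate could in principle be flipped for a single client process roughly like this:
	
	    # assumes client-go reads KUBE_FEATURE_<Name>; illustrative only
	    KUBE_FEATURE_WatchListClient=true minikube start -p functional-667319
	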
	I1209 04:26:53.894942 1187425 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1209 04:26:53.895030 1187425 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1209 04:26:53.902229 1187425 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1209 04:26:53.902301 1187425 kubeadm.go:602] duration metric: took 16.713333ms to restartPrimaryControlPlane
	I1209 04:26:53.902316 1187425 kubeadm.go:403] duration metric: took 56.036306ms to StartCluster
	I1209 04:26:53.902333 1187425 settings.go:142] acquiring lock: {Name:mk8fa744e3d74bf8a1cbf5ac275c9f1969ad91a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 04:26:53.902398 1187425 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22081-1142328/kubeconfig
	I1209 04:26:53.902993 1187425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1142328/kubeconfig: {Name:mk0dc127429f88dc4fdfb2d110deebc58207c1b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 04:26:53.903190 1187425 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1209 04:26:53.903521 1187425 config.go:182] Loaded profile config "functional-667319": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1209 04:26:53.903568 1187425 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1209 04:26:53.903630 1187425 addons.go:70] Setting storage-provisioner=true in profile "functional-667319"
	I1209 04:26:53.903643 1187425 addons.go:239] Setting addon storage-provisioner=true in "functional-667319"
	I1209 04:26:53.903675 1187425 host.go:66] Checking if "functional-667319" exists ...
	I1209 04:26:53.904120 1187425 addons.go:70] Setting default-storageclass=true in profile "functional-667319"
	I1209 04:26:53.904144 1187425 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-667319"
	I1209 04:26:53.904441 1187425 cli_runner.go:164] Run: docker container inspect functional-667319 --format={{.State.Status}}
	I1209 04:26:53.904640 1187425 cli_runner.go:164] Run: docker container inspect functional-667319 --format={{.State.Status}}
	I1209 04:26:53.910201 1187425 out.go:179] * Verifying Kubernetes components...
	I1209 04:26:53.913884 1187425 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 04:26:53.930099 1187425 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 04:26:53.932721 1187425 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22081-1142328/kubeconfig
	I1209 04:26:53.932880 1187425 kapi.go:59] client config for functional-667319: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/client.crt", KeyFile:"/home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/client.key", CAFile:"/home/jenkins/minikube-integration/22081-1142328/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb3ec0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1209 04:26:53.933092 1187425 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:26:53.933105 1187425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1209 04:26:53.933155 1187425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-667319
	I1209 04:26:53.933672 1187425 addons.go:239] Setting addon default-storageclass=true in "functional-667319"
	I1209 04:26:53.933726 1187425 host.go:66] Checking if "functional-667319" exists ...
	I1209 04:26:53.934157 1187425 cli_runner.go:164] Run: docker container inspect functional-667319 --format={{.State.Status}}
	I1209 04:26:53.980209 1187425 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/functional-667319/id_rsa Username:docker}
	I1209 04:26:53.991515 1187425 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1209 04:26:53.991543 1187425 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1209 04:26:53.991606 1187425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-667319
	I1209 04:26:54.014988 1187425 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/functional-667319/id_rsa Username:docker}
	I1209 04:26:54.109673 1187425 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 04:26:54.172299 1187425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1209 04:26:54.172446 1187425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:26:54.932432 1187425 node_ready.go:35] waiting up to 6m0s for node "functional-667319" to be "Ready" ...
	I1209 04:26:54.932477 1187425 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:26:54.932512 1187425 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:26:54.932537 1187425 retry.go:31] will retry after 239.582285ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:26:54.932571 1187425 type.go:168] "Request Body" body=""
	I1209 04:26:54.932584 1187425 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:26:54.932596 1187425 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:26:54.932603 1187425 retry.go:31] will retry after 326.615849ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:26:54.932629 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:26:54.932908 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:26:55.173322 1187425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:26:55.233582 1187425 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:26:55.233631 1187425 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:26:55.233651 1187425 retry.go:31] will retry after 246.357107ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:26:55.259785 1187425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 04:26:55.318382 1187425 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:26:55.318469 1187425 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:26:55.318493 1187425 retry.go:31] will retry after 410.345383ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:26:55.433607 1187425 type.go:168] "Request Body" body=""
	I1209 04:26:55.433683 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:26:55.434019 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:26:55.480272 1187425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:26:55.539370 1187425 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:26:55.543073 1187425 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:26:55.543104 1187425 retry.go:31] will retry after 836.674318ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:26:55.729246 1187425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 04:26:55.790859 1187425 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:26:55.790906 1187425 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:26:55.790952 1187425 retry.go:31] will retry after 634.479833ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:26:55.933159 1187425 type.go:168] "Request Body" body=""
	I1209 04:26:55.933235 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:26:55.933592 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:26:56.380124 1187425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:26:56.425589 1187425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 04:26:56.432912 1187425 type.go:168] "Request Body" body=""
	I1209 04:26:56.433084 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:26:56.433454 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:26:56.462533 1187425 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:26:56.462616 1187425 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:26:56.462643 1187425 retry.go:31] will retry after 603.323732ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:26:56.528272 1187425 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:26:56.528318 1187425 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:26:56.528338 1187425 retry.go:31] will retry after 1.072780189s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:26:56.932753 1187425 type.go:168] "Request Body" body=""
	I1209 04:26:56.932827 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:26:56.933209 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:26:56.933265 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
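	Each empty Response above (status="" milliseconds=0) is a GET of /api/v1/nodes/functional-667319 that never reached the apiserver; the warning confirms connection refused on 192.168.49.2:8441. A crude reachability probe from the host, assuming curl is available (-k because the apiserver cert is not in the host trust store; even a 401/403 would prove the port is accepting connections, unlike the refusals seen here):
	
	    curl -k https://192.168.49.2:8441/version
	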
	I1209 04:26:57.066591 1187425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:26:57.132172 1187425 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:26:57.135761 1187425 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:26:57.135793 1187425 retry.go:31] will retry after 1.855495012s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:26:57.433210 1187425 type.go:168] "Request Body" body=""
	I1209 04:26:57.433286 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:26:57.433630 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:26:57.601957 1187425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 04:26:57.657995 1187425 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:26:57.658038 1187425 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:26:57.658057 1187425 retry.go:31] will retry after 1.134842328s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:26:57.933276 1187425 type.go:168] "Request Body" body=""
	I1209 04:26:57.933355 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:26:57.933644 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:26:58.433445 1187425 type.go:168] "Request Body" body=""
	I1209 04:26:58.433533 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:26:58.433853 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:26:58.793130 1187425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 04:26:58.858674 1187425 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:26:58.858714 1187425 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:26:58.858733 1187425 retry.go:31] will retry after 2.746713696s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:26:58.933078 1187425 type.go:168] "Request Body" body=""
	I1209 04:26:58.933157 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:26:58.933497 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:26:58.933557 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:26:58.991692 1187425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:26:59.049214 1187425 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:26:59.052768 1187425 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:26:59.052797 1187425 retry.go:31] will retry after 2.715253433s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:26:59.433202 1187425 type.go:168] "Request Body" body=""
	I1209 04:26:59.433383 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:26:59.433760 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:26:59.932622 1187425 type.go:168] "Request Body" body=""
	I1209 04:26:59.932706 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:26:59.933025 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:00.432716 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:00.432792 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:00.433084 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:00.932666 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:00.932767 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:00.933080 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:01.432721 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:01.432802 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:01.433155 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:27:01.433220 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
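Interleaved with the apply retries, minikube keeps polling the node object itself: a GET to /api/v1/nodes/functional-667319 roughly every 500ms, with node_ready.go emitting a "will retry" warning every couple of seconds. Note the two distinct endpoints: the polls dial 192.168.49.2:8441 (the container's IP) while kubectl dials localhost:8441 via the in-VM kubeconfig, and both are refused, which points at the apiserver itself being down rather than a network-path problem. A sketch of the polling pattern follows, with illustrative names rather than minikube's actual implementation:

    // nodeready_poll.go: poll a node-status check on a fixed interval and
    // treat transport errors as retryable until a deadline, mirroring the
    // GET/warning cadence in the log.
    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    var errConnRefused = errors.New("dial tcp 192.168.49.2:8441: connect: connection refused")

    // getNodeReady stands in for GET /api/v1/nodes/<name>; it fails while
    // the apiserver is down.
    func getNodeReady(name string) (bool, error) { return false, errConnRefused }

    func main() {
        // the real wait runs for minutes; a few seconds suffice to show the shape
        deadline := time.Now().Add(3 * time.Second)
        for time.Now().Before(deadline) {
            ready, err := getNodeReady("functional-667319")
            if err != nil {
                fmt.Println("will retry:", err)
            } else if ready {
                fmt.Println("node functional-667319 is Ready")
                return
            }
            time.Sleep(500 * time.Millisecond) // matches the ~500ms spacing above
        }
        fmt.Println("timed out waiting for node Ready")
    }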
	I1209 04:27:01.606514 1187425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 04:27:01.664108 1187425 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:27:01.667800 1187425 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:27:01.667831 1187425 retry.go:31] will retry after 3.567848129s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:27:01.769041 1187425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:27:01.828356 1187425 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:27:01.831855 1187425 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:27:01.831890 1187425 retry.go:31] will retry after 1.487712174s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:27:01.933283 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:01.933357 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:01.933696 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:02.433227 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:02.433296 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:02.433566 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:02.933365 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:02.933446 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:02.933784 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:03.320437 1187425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:27:03.380650 1187425 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:27:03.380689 1187425 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:27:03.380707 1187425 retry.go:31] will retry after 2.980491619s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:27:03.432967 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:03.433052 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:03.433335 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:27:03.433382 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:27:03.933173 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:03.933261 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:03.933564 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:04.433334 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:04.433407 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:04.433774 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:04.932608 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:04.932706 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:04.932991 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:05.236581 1187425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 04:27:05.294920 1187425 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:27:05.298256 1187425 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:27:05.298287 1187425 retry.go:31] will retry after 3.775902085s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:27:05.433544 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:05.433623 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:05.433911 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:27:05.433968 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:27:05.932633 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:05.932732 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:05.933097 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:06.361776 1187425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:27:06.423571 1187425 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:27:06.423609 1187425 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:27:06.423628 1187425 retry.go:31] will retry after 5.55631863s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:27:06.432687 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:06.432759 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:06.433064 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:06.932763 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:06.932858 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:06.933188 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:07.432720 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:07.432798 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:07.433122 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:07.932712 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:07.932788 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:07.933143 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:27:07.933270 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:27:08.432753 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:08.432826 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:08.433121 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:08.932708 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:08.932789 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:08.933114 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:09.074480 1187425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 04:27:09.131213 1187425 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:27:09.134642 1187425 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:27:09.134677 1187425 retry.go:31] will retry after 3.336397846s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:27:09.433063 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:09.433136 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:09.433477 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:09.933147 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:09.933243 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:09.933515 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:27:09.933565 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:27:10.433463 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:10.433543 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:10.433860 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:10.933720 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:10.933792 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:10.934110 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:11.432758 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:11.432831 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:11.433103 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:11.932702 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:11.932775 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:11.933127 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:11.980489 1187425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:27:12.042917 1187425 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:27:12.047245 1187425 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:27:12.047276 1187425 retry.go:31] will retry after 4.846358398s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:27:12.432667 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:12.432737 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:12.433027 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:27:12.433074 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:27:12.471387 1187425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 04:27:12.533451 1187425 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:27:12.533488 1187425 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:27:12.533508 1187425 retry.go:31] will retry after 12.396608004s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:27:12.932956 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:12.933031 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:12.933353 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:13.432721 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:13.432794 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:13.433126 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:13.932935 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:13.933007 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:13.933342 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:14.432734 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:14.432802 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:14.433056 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:27:14.433098 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:27:14.932653 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:14.932768 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:14.933061 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:15.432698 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:15.432796 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:15.433182 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:15.932668 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:15.932746 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:15.933050 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:16.432712 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:16.432788 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:16.433123 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:27:16.433176 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:27:16.894794 1187425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:27:16.933270 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:16.933350 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:16.933633 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:16.956237 1187425 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:27:16.956277 1187425 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:27:16.956299 1187425 retry.go:31] will retry after 11.708634593s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:27:17.432723 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:17.432798 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:17.433065 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:17.932740 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:17.932815 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:17.933136 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:18.432860 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:18.432932 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:18.433214 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:27:18.433267 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:27:18.932660 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:18.932728 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:18.933009 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:19.432696 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:19.432772 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:19.433147 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:19.932674 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:19.932750 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:19.933101 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:20.432907 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:20.432984 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:20.433236 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:20.932684 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:20.932760 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:20.933100 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:27:20.933152 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:27:21.432797 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:21.432871 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:21.433197 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:21.932637 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:21.932726 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:21.932993 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:22.432707 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:22.432782 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:22.433117 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:22.932841 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:22.932917 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:22.933234 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:27:22.933291 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:27:23.432668 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:23.432751 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:23.433027 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:23.932873 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:23.932948 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:23.933315 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:24.432680 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:24.432753 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:24.433071 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:24.930697 1187425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 04:27:24.933014 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:24.933088 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:24.933320 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:27:24.933369 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:27:25.005568 1187425 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:27:25.005627 1187425 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:27:25.005648 1187425 retry.go:31] will retry after 8.82909482s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:27:25.433152 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:25.433233 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:25.433532 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:25.932972 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:25.933044 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:25.933358 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:26.432756 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:26.432830 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:26.433108 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:26.932726 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:26.932803 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:26.933099 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:27.432693 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:27.432765 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:27.433082 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:27:27.433136 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:27:27.932636 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:27.932712 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:27.933033 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:28.432693 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:28.432767 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:28.433092 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:28.665515 1187425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:27:28.738878 1187425 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:27:28.745399 1187425 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:27:28.745439 1187425 retry.go:31] will retry after 17.60519501s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
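The waits announced by retry.go climb from under two seconds early in the run (1.86s, 1.13s, 2.75s) to 12.4s and 17.6s by this point, consistent with randomized exponential backoff. The sketch below is an assumption about the shape of that policy, not minikube's actual retry code:

    // backoff_sketch.go: capped exponential backoff with jitter, printing
    // a sequence of delays of the same rough shape as the retry.go lines.
    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    func main() {
        base := time.Second
        for attempt := 0; attempt < 8; attempt++ {
            d := base << attempt // 1s, 2s, 4s, ...
            if d > 10*time.Second {
                d = 10 * time.Second // cap near where the observed delays level off
            }
            jitter := time.Duration(rand.Int63n(int64(d)))
            fmt.Printf("attempt %d: will retry after %v\n", attempt+1, d/2+jitter)
        }
    }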
	I1209 04:27:28.932773 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:28.932863 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:28.933172 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:29.432651 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:29.432722 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:29.432984 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:29.932660 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:29.932732 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:29.933044 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:27:29.933094 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
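[Editor's note] The half-second GET loop above is the node-readiness wait: the node object is fetched repeatedly and its Ready condition checked, with a warning logged while the apiserver keeps refusing connections. A minimal client-go sketch of that check, assuming a kubeconfig at the logged path; waitNodeReady is a hypothetical function, not minikube's node_ready.go.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitNodeReady polls the API server until the node reports Ready=True.
	func waitNodeReady(cs *kubernetes.Clientset, name string) error {
		for {
			node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
			if err != nil {
				// Matches the warnings above: the apiserver is not yet
				// accepting connections, so keep polling.
				fmt.Printf("error getting node %q (will retry): %v\n", name, err)
			} else {
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
						return nil
					}
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		_ = waitNodeReady(cs, "functional-667319")
	}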
	I1209 04:27:30.432735 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:30.432809 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:30.433166 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:30.932654 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:30.932753 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:30.933041 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:31.432690 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:31.432771 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:31.433110 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:31.932741 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:31.932815 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:31.933152 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:27:31.933206 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:27:32.432841 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:32.432914 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:32.433177 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:32.932689 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:32.932763 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:32.933056 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:33.432759 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:33.432858 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:33.433217 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:33.835821 1187425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 04:27:33.901341 1187425 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:27:33.901393 1187425 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:27:33.901417 1187425 retry.go:31] will retry after 15.074885047s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:27:33.933523 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:33.933593 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:33.933865 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:27:33.933909 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:27:34.433650 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:34.433727 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:34.434057 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:34.933020 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:34.933101 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:34.933420 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:35.433095 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:35.433165 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:35.433445 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:35.933243 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:35.933325 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:35.933633 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:36.433407 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:36.433483 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:36.433826 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:27:36.433882 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:27:36.933227 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:36.933299 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:36.933563 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:37.433288 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:37.433419 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:37.433790 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:37.933592 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:37.933667 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:37.934021 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:38.432659 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:38.432729 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:38.433014 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:38.932721 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:38.932798 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:38.933137 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:27:38.933190 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:27:39.432858 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:39.432933 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:39.433235 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:39.932589 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:39.932669 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:39.932951 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:40.432701 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:40.432786 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:40.433116 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:40.932716 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:40.932797 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:40.933091 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:41.432779 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:41.432846 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:41.433142 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:27:41.433204 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:27:41.932681 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:41.932757 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:41.933101 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:42.432844 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:42.432919 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:42.433290 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:42.932967 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:42.933038 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:42.933352 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:43.432697 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:43.432812 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:43.433136 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:43.933033 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:43.933129 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:43.933472 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:27:43.933526 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:27:44.433250 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:44.433328 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:44.433660 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:44.933653 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:44.933724 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:44.934068 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:45.432640 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:45.432721 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:45.433020 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:45.932669 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:45.932752 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:45.933159 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:46.350898 1187425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:27:46.406595 1187425 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:27:46.409949 1187425 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:27:46.409981 1187425 retry.go:31] will retry after 30.377142014s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:27:46.433127 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:46.433197 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:46.433514 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:27:46.433571 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:27:46.933101 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:46.933177 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:46.933501 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:47.433170 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:47.433241 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:47.433507 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:47.932770 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:47.932843 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:47.933174 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:48.432886 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:48.432966 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:48.433255 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:48.932660 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:48.932727 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:48.933049 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:27:48.933100 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:27:48.977251 1187425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 04:27:49.036457 1187425 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:27:49.036497 1187425 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:27:49.036517 1187425 retry.go:31] will retry after 20.293703248s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:27:49.432687 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:49.432763 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:49.433127 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:49.932861 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:49.932933 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:49.933269 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:50.433588 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:50.433662 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:50.433924 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:50.932670 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:50.932744 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:50.933080 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:27:50.933141 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:27:51.432801 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:51.432888 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:51.433180 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:51.932877 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:51.932959 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:51.933270 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:52.432704 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:52.432782 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:52.433138 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:52.932700 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:52.932780 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:52.933082 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:53.432653 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:53.432725 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:53.433037 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:27:53.433089 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:27:53.932975 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:53.933048 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:53.933385 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:54.432710 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:54.432790 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:54.433145 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:54.932877 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:54.932952 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:54.933240 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:55.432707 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:55.432782 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:55.433125 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:27:55.433191 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:27:55.932861 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:55.932943 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:55.933270 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:56.432680 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:56.432756 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:56.433029 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:56.932719 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:56.932801 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:56.933134 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:57.432697 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:57.432773 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:57.433096 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:57.932659 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:57.932729 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:57.933033 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:27:57.933082 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:27:58.432760 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:58.432832 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:58.433186 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:58.932893 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:58.932974 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:58.933286 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:59.432662 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:59.432732 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:59.433040 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:59.932639 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:59.932712 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:59.933039 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:00.432720 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:00.432811 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:00.433208 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:28:00.433277 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:28:00.932672 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:00.932741 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:00.933005 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:01.432725 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:01.432807 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:01.433146 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:01.932896 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:01.932975 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:01.933314 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:02.432655 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:02.432728 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:02.433016 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:02.932700 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:02.932781 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:02.933135 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:28:02.933190 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:28:03.432857 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:03.432934 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:03.433286 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:03.933001 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:03.933068 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:03.933321 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:04.432726 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:04.432801 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:04.433094 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:04.932617 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:04.932698 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:04.933036 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:05.432707 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:05.432788 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:05.433060 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:28:05.433107 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:28:05.932743 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:05.932818 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:05.933156 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:06.432696 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:06.432777 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:06.433116 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:06.933451 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:06.933527 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:06.933789 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:07.433539 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:07.433615 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:07.433955 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:28:07.434011 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:28:07.933609 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:07.933684 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:07.934024 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:08.432650 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:08.432722 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:08.433067 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:08.932695 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:08.932767 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:08.933107 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:09.330698 1187425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 04:28:09.392626 1187425 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:28:09.392671 1187425 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:28:09.392765 1187425 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
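[Editor's note] At this point the 'default-storageclass' addon is abandoned after its retries are exhausted. kubectl's own error text suggests skipping client-side validation with --validate=false; the sketch below shows that knob, though it would not have rescued this run, since the apiserver itself was refusing connections rather than merely failing the OpenAPI download. The command mirrors the logged invocation; wrapping it in Go is illustrative only.

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// --validate=false skips the client-side OpenAPI download that failed
		// above. Here the connection itself was refused, so this is shown
		// only to illustrate the flag kubectl's error message points at.
		out, err := exec.Command("sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
			"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
			"apply", "--force", "--validate=false",
			"-f", "/etc/kubernetes/addons/storageclass.yaml").CombinedOutput()
		fmt.Printf("%s err=%v\n", out, err)
	}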
	I1209 04:28:09.432874 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:09.432952 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:09.433232 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:09.932653 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:09.932723 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:09.932991 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:28:09.933037 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:28:10.432663 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:10.432757 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:10.433041 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:10.932700 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:10.932793 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:10.933076 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:11.433216 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:11.433303 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:11.433575 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:11.933330 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:11.933412 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:11.933748 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:28:11.933801 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:28:12.433587 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:12.433670 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:12.434027 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:12.932705 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:12.932772 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:12.933018 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:13.432706 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:13.432787 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:13.433119 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:13.932999 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:13.933099 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:13.933392 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:14.432657 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:14.432736 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:14.433055 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:28:14.433109 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:28:14.932664 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:14.932748 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:14.933036 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:15.432674 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:15.432750 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:15.433098 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:15.932771 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:15.932842 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:15.933137 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:16.432714 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:16.432788 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:16.433087 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:28:16.433135 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:28:16.787371 1187425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:28:16.844461 1187425 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:28:16.844502 1187425 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:28:16.844590 1187425 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1209 04:28:16.849226 1187425 out.go:179] * Enabled addons: 
	I1209 04:28:16.852870 1187425 addons.go:530] duration metric: took 1m22.949297316s for enable addons: enabled=[]
	I1209 04:28:16.932633 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:16.932724 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:16.933045 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:17.432667 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:17.432732 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:17.433031 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:17.932701 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:17.932788 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:17.933067 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:18.432770 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:18.432843 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:18.433126 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:28:18.433178 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:28:18.932677 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:18.932752 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:18.932995 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:19.432687 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:19.432781 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:19.433100 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:19.932854 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:19.932926 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:19.933256 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:20.433039 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:20.433107 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:20.433386 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:28:20.433429 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:28:20.933212 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:20.933282 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:20.933581 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:21.433349 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:21.433421 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:21.433766 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:21.933219 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:21.933285 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:21.933576 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:22.433203 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:22.433273 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:22.433621 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:28:22.433676 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:28:22.933451 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:22.933536 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:22.933840 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:23.433217 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:23.433287 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:23.433546 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:23.933612 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:23.933689 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:23.934050 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:24.432755 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:24.432836 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:24.433161 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:24.932932 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:24.933008 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:24.933276 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:28:24.933327 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:28:25.432973 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:25.433049 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:25.433379 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:25.933099 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:25.933181 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:25.933530 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:26.433216 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:26.433283 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:26.433547 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:26.933320 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:26.933401 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:26.933762 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:28:26.933818 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:28:27.433589 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:27.433667 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:27.434004 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:27.932649 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:27.932724 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:27.933001 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:28.432685 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:28.432757 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:28.433490 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:28.933280 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:28.933359 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:28.933693 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:29.433205 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:29.433272 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:29.433545 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:28:29.433592 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:28:29.933575 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:29.933655 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:29.933979 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:30.432667 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:30.432747 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:30.433044 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:30.932681 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:30.932752 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:30.933046 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:31.432688 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:31.432771 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:31.433098 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:31.932806 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:31.932880 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:31.933203 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:28:31.933259 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:28:32.432774 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:32.432849 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:32.433097 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:32.932695 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:32.932765 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:32.933078 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:33.432694 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:33.432776 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:33.433090 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:33.932980 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:33.933051 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:33.933310 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:28:33.933359 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:28:34.432667 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:34.432745 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:34.433055 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:34.932949 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:34.933032 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:34.933356 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:35.433019 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:35.433096 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:35.433526 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:35.933390 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:35.933466 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:35.933812 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:28:35.933870 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:28:36.433595 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:36.433676 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:36.433996 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:36.932657 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:36.932727 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:36.933025 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:37.432697 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:37.432776 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:37.433068 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:37.932702 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:37.932780 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:37.933143 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:38.432646 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:38.432727 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:38.433055 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:28:38.433106 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:28:38.932743 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:38.932816 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:38.933130 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:39.432847 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:39.432919 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:39.433263 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:39.933046 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:39.933114 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:39.933379 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:40.432710 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:40.432783 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:40.433129 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:28:40.433184 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:28:40.932927 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:40.933008 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:40.933371 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:41.432689 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:41.432758 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:41.433014 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:41.932710 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:41.932795 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:41.933094 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:42.432689 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:42.432808 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:42.433149 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:28:42.433204 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:28:42.932862 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:42.932928 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:42.933226 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:43.432918 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:43.432995 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:43.433361 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:43.933127 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:43.933204 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:43.933534 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:44.433220 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:44.433305 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:44.433609 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:28:44.433661 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:28:44.933573 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:44.933652 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:44.933989 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:45.432671 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:45.432808 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:45.433150 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:45.932712 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:45.932784 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:45.933049 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:46.432736 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:46.432815 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:46.433149 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:46.932701 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:46.932779 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:46.933073 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:28:46.933121 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:28:47.432739 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:47.432826 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:47.433130 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:47.932695 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:47.932765 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:47.933076 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:48.432672 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:48.432745 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:48.433062 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:48.932639 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:48.932746 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:48.933042 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:49.432707 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:49.432782 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:49.433123 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:28:49.433177 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:28:49.932922 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:49.932995 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:49.933579 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:50.433185 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:50.433253 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:50.433517 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:50.933391 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:50.933468 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:50.933797 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:51.433551 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:51.433624 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:51.433934 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:28:51.433990 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:28:51.933180 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:51.933283 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:51.933542 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:52.433358 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:52.433437 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:52.433756 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:52.933478 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:52.933559 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:52.933900 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:53.433153 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:53.433229 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:53.433491 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:53.932707 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:53.932895 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:53.933271 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:28:53.933325 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:28:54.432706 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:54.432787 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:54.433082 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:54.933635 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:54.933745 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:54.934087 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:55.432697 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:55.432773 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:55.433110 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:55.932879 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:55.932954 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:55.933290 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:28:55.933359 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:28:56.432868 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:56.432941 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:56.433305 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:56.932702 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:56.932781 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:56.933131 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:57.432846 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:57.432925 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:57.433270 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:57.932659 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:57.932734 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:57.933027 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:58.432714 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:58.432792 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:58.433128 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:28:58.433197 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:28:58.932868 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:58.932944 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:58.933265 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:59.432668 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:59.432735 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:59.432989 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:59.932616 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:59.932707 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:59.933033 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:00.432720 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:00.432802 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:00.433159 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:29:00.433228 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:29:00.932660 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:00.932731 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:00.933053 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:01.432716 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:01.432794 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:01.433137 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:01.932702 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:01.932776 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:01.933098 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:02.432775 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:02.432843 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:02.433108 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:02.932791 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:02.932873 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:02.933214 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:29:02.933284 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:29:03.432715 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:03.432795 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:03.433113 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:03.933003 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:03.933076 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:03.933364 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:04.432671 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:04.432749 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:04.433066 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:04.932620 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:04.932694 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:04.933013 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:05.432723 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:05.432790 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:05.433120 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:29:05.433184 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:29:05.932842 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:05.932925 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:05.933228 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:06.432721 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:06.432798 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:06.433119 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:06.932673 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:06.932758 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:06.933065 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:07.432690 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:07.432761 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:07.433037 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:07.932687 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:07.932769 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:07.933108 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:29:07.933164 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:29:08.432792 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:08.432858 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:08.433117 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:08.932787 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:08.932863 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:08.933157 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... the same GET poll of https://192.168.49.2:8441/api/v1/nodes/functional-667319 repeats every ~500ms from 04:29:09 through 04:30:09, each attempt logging "Request Body" body="", the request headers shown above, and "Response" status="" headers="" milliseconds=0; roughly every two seconds node_ready.go:55 emits the same warning: error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused ...]
	I1209 04:30:10.432697 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:10.432775 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:10.433080 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:10.932767 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:10.932837 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:10.933122 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:11.432711 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:11.432785 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:11.433139 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:11.932864 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:11.932943 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:11.933292 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:12.432972 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:12.433050 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:12.433319 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:30:12.433362 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:30:12.932693 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:12.932770 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:12.933130 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:13.432817 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:13.432891 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:13.433211 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:13.932954 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:13.933023 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:13.933298 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:14.432961 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:14.433039 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:14.433383 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:30:14.433439 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:30:14.933212 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:14.933286 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:14.933615 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:15.433214 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:15.433283 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:15.433537 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:15.933372 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:15.933448 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:15.933750 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:16.433525 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:16.433604 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:16.433977 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:30:16.434106 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:30:16.932772 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:16.932839 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:16.933100 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:17.432712 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:17.432793 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:17.433089 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:17.932769 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:17.932849 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:17.933173 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:18.432930 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:18.432998 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:18.433257 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:18.932950 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:18.933025 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:18.933372 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:30:18.933434 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:30:19.432919 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:19.433009 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:19.433344 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:19.933155 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:19.933227 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:19.933491 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:20.433362 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:20.433448 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:20.433795 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:20.933260 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:20.933344 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:20.933670 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:30:20.933726 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:30:21.433173 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:21.433246 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:21.433511 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:21.933299 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:21.933379 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:21.933716 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:22.433492 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:22.433570 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:22.433867 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:22.933284 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:22.933366 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:22.933654 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:23.433368 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:23.433438 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:23.433760 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:30:23.433812 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:30:23.933593 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:23.933675 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:23.933994 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:24.432661 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:24.432729 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:24.432981 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:24.932865 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:24.932938 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:24.933361 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:25.432685 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:25.432763 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:25.433120 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:25.932767 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:25.932839 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:25.933140 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:30:25.933197 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:30:26.432718 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:26.432796 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:26.433197 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:26.932889 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:26.932990 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:26.933317 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:27.432657 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:27.432727 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:27.433032 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:27.932724 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:27.932798 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:27.933135 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:28.432696 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:28.432772 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:28.433073 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:30:28.433121 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:30:28.932660 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:28.932735 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:28.933045 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:29.432714 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:29.432786 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:29.433158 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:29.932918 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:29.933000 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:29.933354 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:30.432667 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:30.432740 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:30.433039 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:30.932757 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:30.932838 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:30.933183 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:30:30.933239 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:30:31.432899 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:31.432979 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:31.433354 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:31.933050 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:31.933119 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:31.933461 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:32.433235 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:32.433315 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:32.433644 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:32.933442 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:32.933524 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:32.933825 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:30:32.933872 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:30:33.433233 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:33.433304 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:33.433591 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:33.933547 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:33.933627 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:33.933938 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:34.432679 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:34.432763 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:34.433108 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:34.933586 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:34.933660 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:34.933905 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:30:34.933945 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:30:35.432656 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:35.432733 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:35.433079 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:35.932801 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:35.932887 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:35.933268 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:36.432736 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:36.432805 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:36.433059 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:36.932731 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:36.932806 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:36.933156 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:37.432867 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:37.432942 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:37.433311 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:30:37.433368 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:30:37.932650 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:37.932720 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:37.932998 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:38.432684 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:38.432763 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:38.433055 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:38.932741 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:38.932818 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:38.933136 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:39.432679 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:39.432748 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:39.433040 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:39.932790 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:39.932869 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:39.933219 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:30:39.933279 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:30:40.432703 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:40.432777 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:40.433111 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:40.932641 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:40.932707 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:40.932957 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:41.432666 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:41.432744 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:41.433069 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:41.932847 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:41.932929 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:41.933224 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:42.432889 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:42.432958 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:42.433265 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:30:42.433309 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:30:42.932708 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:42.932789 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:42.933126 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:43.432820 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:43.432902 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:43.433230 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:43.933144 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:43.933213 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:43.933465 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:44.433223 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:44.433300 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:44.433652 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:30:44.433704 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:30:44.933589 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:44.933670 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:44.934005 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:45.432690 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:45.432762 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:45.433007 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:45.932747 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:45.932822 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:45.933163 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:46.432880 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:46.432953 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:46.433265 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:46.932662 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:46.932736 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:46.933048 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:30:46.933099 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:30:47.432707 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:47.432797 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:47.433190 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:47.932887 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:47.932971 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:47.933316 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:48.432720 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:48.432792 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:48.433100 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:48.932688 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:48.932768 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:48.933088 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:30:48.933148 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:30:49.432733 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:49.432809 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:49.433125 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:49.933000 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:49.933071 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:49.933338 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:50.433013 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:50.433086 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:50.433573 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:50.933345 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:50.933421 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:50.933709 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:30:50.933750 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:30:51.433232 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:51.433307 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:51.433630 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:51.933396 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:51.933477 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:51.933822 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:52.433445 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:52.433526 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:52.433848 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:52.933226 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:52.933298 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:52.933562 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:53.433320 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:53.433394 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:53.433724 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:30:53.433778 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:30:53.932930 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:53.933016 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:53.933473 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:54.433004 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:54.433155 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:54.433480 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:54.933346 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:54.933427 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:54.933751 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:55.433491 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:55.433571 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:55.433940 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:30:55.434008 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:30:55.933242 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:55.933327 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:55.933662 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:56.433447 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:56.433527 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:56.433865 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:56.933651 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:56.933744 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:56.934082 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:57.432792 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:57.432864 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:57.433162 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:57.932683 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:57.932753 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:57.933114 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:30:57.933173 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:30:58.432860 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:58.432937 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:58.433264 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:58.932676 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:58.932748 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:58.932997 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:59.432729 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:59.432808 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:59.433150 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:59.933081 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:59.933159 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:59.933480 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:30:59.933530 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:31:00.433233 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:00.433315 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:00.433580 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[log condensed: the identical GET https://192.168.49.2:8441/api/v1/nodes/functional-667319 request/response cycle shown above repeats roughly every 500 ms from 04:31:00.933 through 04:32:01.933, every attempt returning "dial tcp 192.168.49.2:8441: connect: connection refused"; the node_ready.go:55 "will retry" warning recurs about every two seconds throughout this window]
	I1209 04:32:02.433066 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:02.433138 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:02.433476 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:02.933290 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:02.933362 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:02.933678 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:03.433228 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:03.433307 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:03.433557 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:32:03.433604 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:32:03.933531 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:03.933606 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:03.933926 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:04.432634 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:04.432709 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:04.433045 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:04.932774 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:04.932840 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:04.933129 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:05.432832 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:05.432907 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:05.433248 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:05.932726 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:05.932801 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:05.933145 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:32:05.933201 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:32:06.432676 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:06.432754 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:06.433037 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:06.932744 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:06.932823 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:06.933214 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:07.432896 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:07.432968 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:07.433319 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:07.933026 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:07.933110 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:07.933393 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:32:07.933441 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:32:08.432735 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:08.432817 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:08.433284 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:08.932871 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:08.932978 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:08.933325 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:09.432676 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:09.432743 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:09.432980 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:09.932851 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:09.932929 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:09.933264 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:10.432726 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:10.432817 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:10.433167 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:32:10.433217 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:32:10.932660 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:10.932732 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:10.933027 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:11.432714 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:11.432790 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:11.433154 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:11.932874 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:11.932955 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:11.933284 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:12.432656 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:12.432727 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:12.432974 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:12.932659 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:12.932737 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:12.933062 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:32:12.933115 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:32:13.432673 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:13.432745 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:13.433062 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:13.932942 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:13.933022 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:13.933305 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:14.432666 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:14.432737 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:14.433054 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:14.932628 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:14.932702 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:14.933027 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:15.432739 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:15.432819 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:15.433087 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:32:15.433132 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:32:15.932808 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:15.932886 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:15.933232 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:16.432939 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:16.433082 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:16.433415 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:16.933224 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:16.933297 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:16.933611 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:17.433392 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:17.433466 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:17.433806 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:32:17.433861 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:32:17.933475 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:17.933557 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:17.933868 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:18.433222 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:18.433292 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:18.433591 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:18.933254 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:18.933331 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:18.933670 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:19.433471 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:19.433558 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:19.433901 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:32:19.433958 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:32:19.932590 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:19.932659 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:19.932906 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:20.432666 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:20.432742 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:20.433050 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:20.932683 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:20.932763 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:20.933127 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:21.432653 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:21.432736 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:21.433076 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:21.932743 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:21.932826 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:21.933126 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:32:21.933180 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:32:22.432753 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:22.432822 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:22.433257 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:22.932652 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:22.932719 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:22.933033 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:23.432711 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:23.432784 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:23.433133 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:23.933069 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:23.933144 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:23.933485 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:32:23.933542 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:32:24.433150 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:24.433216 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:24.433549 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:24.933587 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:24.933667 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:24.933983 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:25.432703 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:25.432780 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:25.433147 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:25.932823 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:25.932900 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:25.933226 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:26.432706 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:26.432784 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:26.433129 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:32:26.433184 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:32:26.932670 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:26.932744 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:26.933089 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:27.432659 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:27.432738 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:27.433018 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:27.932725 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:27.932802 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:27.933150 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:28.432726 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:28.432802 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:28.433119 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:28.932673 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:28.932744 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:28.933005 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:32:28.933048 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:32:29.432681 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:29.432758 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:29.433106 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:29.932862 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:29.932935 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:29.933263 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:30.432649 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:30.432724 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:30.433048 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:30.932738 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:30.932809 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:30.933151 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:32:30.933209 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:32:31.432726 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:31.432804 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:31.433153 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:31.932709 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:31.932785 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:31.933093 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:32.432864 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:32.432944 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:32.433316 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:32.932716 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:32.932791 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:32.933128 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:33.432801 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:33.432867 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:33.433124 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:32:33.433164 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:32:33.932966 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:33.933040 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:33.933351 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:34.432696 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:34.432771 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:34.433125 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:34.932830 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:34.932901 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:34.933235 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:35.432934 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:35.433024 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:35.433448 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:32:35.433504 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:32:35.933268 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:35.933342 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:35.933709 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:36.433228 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:36.433294 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:36.433588 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:36.933406 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:36.933485 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:36.933802 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:37.433562 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:37.433642 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:37.433939 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:32:37.433983 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:32:37.933183 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:37.933254 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:37.933510 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:38.433295 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:38.433365 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:38.433691 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:38.933541 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:38.933625 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:38.933982 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:39.432665 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:39.432740 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:39.432999 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:39.932625 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:39.932702 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:39.933037 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:32:39.933088 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:32:40.432600 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:40.432680 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:40.432996 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:40.932646 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:40.932715 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:40.933018 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:41.432713 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:41.432789 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:41.433153 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:41.932729 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:41.932806 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:41.933137 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:32:41.933194 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:32:42.432656 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:42.432729 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:42.433054 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:42.932710 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:42.932792 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:42.933135 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:43.432827 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:43.432907 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:43.433251 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:43.932972 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:43.933046 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:43.933297 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:32:43.933337 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:32:44.433057 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:44.433132 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:44.433467 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:44.933349 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:44.933425 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:44.933760 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:45.433200 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:45.433271 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:45.433522 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:45.933329 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:45.933403 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:45.933719 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:32:45.933777 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:32:46.433543 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:46.433636 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:46.433947 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:46.933240 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:46.933306 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:46.933602 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:47.433389 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:47.433467 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:47.433758 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:47.933580 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:47.933664 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:47.934006 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:32:47.934069 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:32:48.432643 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:48.432717 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:48.432979 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:48.932681 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:48.932755 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:48.933070 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:49.432675 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:49.432745 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:49.433131 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:49.933137 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:49.933206 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:49.933501 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:50.433255 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:50.433323 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:50.433610 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:32:50.433656 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:32:50.933294 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:50.933368 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:50.933657 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:51.433200 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:51.433282 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:51.433542 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:51.933307 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:51.933394 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:51.933715 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:52.433471 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:52.433553 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:52.433873 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:32:52.433936 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:32:52.933235 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:52.933316 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:52.933571 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:53.433369 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:53.433448 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:53.433777 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:53.933615 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:53.933693 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:53.934065 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:54.432659 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:54.432727 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:54.433303 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:54.933397 1187425 type.go:168] "Request Body" body=""
	W1209 04:32:54.933475 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): client rate limiter Wait returned an error: context deadline exceeded
	I1209 04:32:54.933495 1187425 node_ready.go:38] duration metric: took 6m0.001016343s for node "functional-667319" to be "Ready" ...
	I1209 04:32:54.936503 1187425 out.go:203] 
	W1209 04:32:54.939246 1187425 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1209 04:32:54.939264 1187425 out.go:285] * 
	W1209 04:32:54.941401 1187425 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 04:32:54.944197 1187425 out.go:203] 
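The 500ms GET loop above is the node_ready wait: minikube polls /api/v1/nodes/functional-667319 every half second until the node reports Ready or the 6m0s wait budget runs out. The final warning ("client rate limiter Wait returned an error: context deadline exceeded") is the wait context expiring inside client-go's rate limiter, after which the start aborts with GUEST_START. A minimal client-go sketch of an equivalent poll (illustrative helper names, not minikube's own code; assumes a reachable kubeconfig):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the node's Ready condition every 500ms for up to 6m,
// mirroring the request cadence visible in the log above.
func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				// Transient errors (e.g. connection refused while the
				// apiserver is down) are swallowed so the poll retries.
				return false, nil
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	fmt.Println(waitNodeReady(context.Background(), kubernetes.NewForConfigOrDie(cfg), "functional-667319"))
}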
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 09 04:26:52 functional-667319 containerd[5187]: time="2025-12-09T04:26:52.429451048Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 09 04:26:52 functional-667319 containerd[5187]: time="2025-12-09T04:26:52.429532046Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 09 04:26:52 functional-667319 containerd[5187]: time="2025-12-09T04:26:52.429642919Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 09 04:26:52 functional-667319 containerd[5187]: time="2025-12-09T04:26:52.429717796Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 09 04:26:52 functional-667319 containerd[5187]: time="2025-12-09T04:26:52.429781302Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 09 04:26:52 functional-667319 containerd[5187]: time="2025-12-09T04:26:52.429840328Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 09 04:26:52 functional-667319 containerd[5187]: time="2025-12-09T04:26:52.429902915Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 09 04:26:52 functional-667319 containerd[5187]: time="2025-12-09T04:26:52.429981665Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 09 04:26:52 functional-667319 containerd[5187]: time="2025-12-09T04:26:52.430052966Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 09 04:26:52 functional-667319 containerd[5187]: time="2025-12-09T04:26:52.430137616Z" level=info msg="Connect containerd service"
	Dec 09 04:26:52 functional-667319 containerd[5187]: time="2025-12-09T04:26:52.430482828Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 09 04:26:52 functional-667319 containerd[5187]: time="2025-12-09T04:26:52.431094871Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 09 04:26:52 functional-667319 containerd[5187]: time="2025-12-09T04:26:52.446888716Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 09 04:26:52 functional-667319 containerd[5187]: time="2025-12-09T04:26:52.446963922Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 09 04:26:52 functional-667319 containerd[5187]: time="2025-12-09T04:26:52.447004126Z" level=info msg="Start subscribing containerd event"
	Dec 09 04:26:52 functional-667319 containerd[5187]: time="2025-12-09T04:26:52.447056186Z" level=info msg="Start recovering state"
	Dec 09 04:26:52 functional-667319 containerd[5187]: time="2025-12-09T04:26:52.486039443Z" level=info msg="Start event monitor"
	Dec 09 04:26:52 functional-667319 containerd[5187]: time="2025-12-09T04:26:52.486090379Z" level=info msg="Start cni network conf syncer for default"
	Dec 09 04:26:52 functional-667319 containerd[5187]: time="2025-12-09T04:26:52.486100127Z" level=info msg="Start streaming server"
	Dec 09 04:26:52 functional-667319 containerd[5187]: time="2025-12-09T04:26:52.486109505Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 09 04:26:52 functional-667319 containerd[5187]: time="2025-12-09T04:26:52.486121919Z" level=info msg="runtime interface starting up..."
	Dec 09 04:26:52 functional-667319 containerd[5187]: time="2025-12-09T04:26:52.486128778Z" level=info msg="starting plugins..."
	Dec 09 04:26:52 functional-667319 containerd[5187]: time="2025-12-09T04:26:52.486144646Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 09 04:26:52 functional-667319 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 09 04:26:52 functional-667319 containerd[5187]: time="2025-12-09T04:26:52.488088785Z" level=info msg="containerd successfully booted in 0.083246s"
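The only error in the containerd startup above is the CNI config load: /etc/cni/net.d is still empty, so the CRI plugin stays in "cni plugin not initialized" until the "cni network conf syncer" sees a config file (minikube recommends kindnet for the docker driver + containerd combination, per the cni.go line in the Last Start log below). Purely for illustration, a minimal bridge conflist of the shape that syncer watches for; the name and subnet here are hypothetical, not what minikube writes:

{
  "cniVersion": "1.0.0",
  "name": "example-net",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "ranges": [[{ "subnet": "10.244.0.0/16" }]]
      }
    }
  ]
}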
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:32:56.767387    8426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:32:56.768096    8426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:32:56.769842    8426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:32:56.770435    8426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:32:56.771984    8426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 9 03:13] overlayfs: idmapped layers are currently not supported
	[ +25.904254] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:14] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:16] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:18] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:19] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:30] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:32] overlayfs: idmapped layers are currently not supported
	[ +28.114653] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:33] overlayfs: idmapped layers are currently not supported
	[ +23.720849] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:34] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:35] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:36] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:37] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:38] overlayfs: idmapped layers are currently not supported
	[ +23.656275] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:39] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:57] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:58] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:00] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:02] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:03] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:05] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:06] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 04:32:56 up  7:14,  0 user,  load average: 0.23, 0.25, 0.79
	Linux functional-667319 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 09 04:32:53 functional-667319 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 09 04:32:54 functional-667319 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 808.
	Dec 09 04:32:54 functional-667319 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:32:54 functional-667319 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:32:54 functional-667319 kubelet[8310]: E1209 04:32:54.472975    8310 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 09 04:32:54 functional-667319 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 09 04:32:54 functional-667319 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 09 04:32:55 functional-667319 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 809.
	Dec 09 04:32:55 functional-667319 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:32:55 functional-667319 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:32:55 functional-667319 kubelet[8316]: E1209 04:32:55.258641    8316 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 09 04:32:55 functional-667319 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 09 04:32:55 functional-667319 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 09 04:32:55 functional-667319 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 810.
	Dec 09 04:32:55 functional-667319 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:32:55 functional-667319 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:32:55 functional-667319 kubelet[8337]: E1209 04:32:55.989976    8337 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 09 04:32:55 functional-667319 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 09 04:32:55 functional-667319 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 09 04:32:56 functional-667319 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 811.
	Dec 09 04:32:56 functional-667319 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:32:56 functional-667319 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:32:56 functional-667319 kubelet[8418]: E1209 04:32:56.740675    8418 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 09 04:32:56 functional-667319 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 09 04:32:56 functional-667319 systemd[1]: kubelet.service: Failed with result 'exit-code'.
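Every restart in this loop (counters 808 through 811) dies in kubelet's config validation, before the node agent ever runs: the host kernel (5.15.0-1084-aws, per the kernel section above) is booted with cgroup v1, and this kubelet is configured to refuse cgroup v1 hosts. The underlying check is effectively a statfs magic test on /sys/fs/cgroup; a standalone sketch of that probe (assuming golang.org/x/sys/unix):

package main

import (
	"fmt"

	"golang.org/x/sys/unix"
)

// CGROUP2_SUPER_MAGIC from <linux/magic.h>; /sys/fs/cgroup reports this
// filesystem type only when the kernel runs the unified (v2) hierarchy.
const cgroup2SuperMagic = 0x63677270

func main() {
	var st unix.Statfs_t
	if err := unix.Statfs("/sys/fs/cgroup", &st); err != nil {
		panic(err)
	}
	if st.Type == cgroup2SuperMagic {
		fmt.Println("cgroup v2 (unified hierarchy)")
	} else {
		fmt.Println("cgroup v1 (legacy hierarchy)") // what this host reports
	}
}

Likely fixes live outside the kicbase container: boot the host with the unified hierarchy (typically the systemd.unified_cgroup_hierarchy=1 kernel parameter) or run a kubelet that still tolerates v1.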
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-667319 -n functional-667319
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-667319 -n functional-667319: exit status 2 (348.996034ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-667319" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (367.82s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (2.22s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-667319 get po -A
functional_test.go:711: (dbg) Non-zero exit: kubectl --context functional-667319 get po -A: exit status 1 (62.139495ms)

** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:713: failed to get kubectl pods: args "kubectl --context functional-667319 get po -A" : exit status 1
functional_test.go:717: expected stderr to be empty but got *"The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?\n"*: args "kubectl --context functional-667319 get po -A"
functional_test.go:720: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-667319 get po -A"
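Given the empty container status table and the kubelet crash loop in the SoftStart post-mortem above, the apiserver never came up, so this refusal is expected: nothing listens on port 8441 either inside the node (the localhost:8441 describe-nodes attempt) or on the container IP. A minimal reproduction independent of kubectl (illustrative only):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Expect: dial tcp 192.168.49.2:8441: connect: connection refused
	_, err := net.DialTimeout("tcp", "192.168.49.2:8441", 2*time.Second)
	fmt.Println(err)
}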
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-667319
helpers_test.go:243: (dbg) docker inspect functional-667319:

-- stdout --
	[
	    {
	        "Id": "e5b6511799c8d5c445a335a3bd5cc9a61b518fc27ac93dad8800da366ef32129",
	        "Created": "2025-12-09T04:18:34.060957311Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1182075,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-09T04:18:34.126944158Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e4eb91ed18a24161fce60c7cdd660144ecd5b8c5029dc2dea2c5e423c2f48ce4",
	        "ResolvConfPath": "/var/lib/docker/containers/e5b6511799c8d5c445a335a3bd5cc9a61b518fc27ac93dad8800da366ef32129/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e5b6511799c8d5c445a335a3bd5cc9a61b518fc27ac93dad8800da366ef32129/hostname",
	        "HostsPath": "/var/lib/docker/containers/e5b6511799c8d5c445a335a3bd5cc9a61b518fc27ac93dad8800da366ef32129/hosts",
	        "LogPath": "/var/lib/docker/containers/e5b6511799c8d5c445a335a3bd5cc9a61b518fc27ac93dad8800da366ef32129/e5b6511799c8d5c445a335a3bd5cc9a61b518fc27ac93dad8800da366ef32129-json.log",
	        "Name": "/functional-667319",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-667319:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-667319",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e5b6511799c8d5c445a335a3bd5cc9a61b518fc27ac93dad8800da366ef32129",
	                "LowerDir": "/var/lib/docker/overlay2/b0239006282b6e4609a1f554d0a3fb94c749a13505795c8e4078cb2db194e8e0-init/diff:/var/lib/docker/overlay2/c44bb57aa59cc265266f37f2bb6e7ec0e7d641c3b4aeaa57e6d23deec6f0d1d4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b0239006282b6e4609a1f554d0a3fb94c749a13505795c8e4078cb2db194e8e0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b0239006282b6e4609a1f554d0a3fb94c749a13505795c8e4078cb2db194e8e0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b0239006282b6e4609a1f554d0a3fb94c749a13505795c8e4078cb2db194e8e0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-667319",
	                "Source": "/var/lib/docker/volumes/functional-667319/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-667319",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-667319",
	                "name.minikube.sigs.k8s.io": "functional-667319",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7c81dabcd9e57af9bce0bc0f5619f6ef3a27af43f4b649283a5bd778ab256415",
	            "SandboxKey": "/var/run/docker/netns/7c81dabcd9e5",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33900"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33901"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33904"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33902"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33903"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-667319": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "fe:40:bd:46:56:d8",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "88b3a65de70c15005c532a44219284d4df94e474ca5b78b04514c2f932b03beb",
	                    "EndpointID": "bdef7b156f4a28c1f641ae70b42db2750bb810ae6fe93fd65325e62eb232fe91",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-667319",
	                        "e5b6511799c8"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
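Only two fields of the inspect dump above matter for this failure: State.Status is "running" (the node container itself is fine) and 8441/tcp is published on 127.0.0.1:33903 (the apiserver port mapping exists; there is just no listener behind it). Single fields like these can be pulled with the same Go templates the harness uses in its `docker container inspect -f` runs later in this log, for example:

docker inspect -f '{{.State.Status}}' functional-667319
docker inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-667319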
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-667319 -n functional-667319
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-667319 -n functional-667319: exit status 2 (299.54841ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-667319 logs -n 25
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                              ARGS                                                                               │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-717497 ssh sudo cat /etc/ssl/certs/11442312.pem                                                                                                      │ functional-717497 │ jenkins │ v1.37.0 │ 09 Dec 25 04:18 UTC │ 09 Dec 25 04:18 UTC │
	│ image          │ functional-717497 image load --daemon kicbase/echo-server:functional-717497 --alsologtostderr                                                                   │ functional-717497 │ jenkins │ v1.37.0 │ 09 Dec 25 04:18 UTC │ 09 Dec 25 04:18 UTC │
	│ ssh            │ functional-717497 ssh sudo cat /usr/share/ca-certificates/11442312.pem                                                                                          │ functional-717497 │ jenkins │ v1.37.0 │ 09 Dec 25 04:18 UTC │ 09 Dec 25 04:18 UTC │
	│ ssh            │ functional-717497 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                        │ functional-717497 │ jenkins │ v1.37.0 │ 09 Dec 25 04:18 UTC │ 09 Dec 25 04:18 UTC │
	│ image          │ functional-717497 image ls                                                                                                                                      │ functional-717497 │ jenkins │ v1.37.0 │ 09 Dec 25 04:18 UTC │ 09 Dec 25 04:18 UTC │
	│ ssh            │ functional-717497 ssh sudo cat /etc/test/nested/copy/1144231/hosts                                                                                              │ functional-717497 │ jenkins │ v1.37.0 │ 09 Dec 25 04:18 UTC │ 09 Dec 25 04:18 UTC │
	│ image          │ functional-717497 image save kicbase/echo-server:functional-717497 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr │ functional-717497 │ jenkins │ v1.37.0 │ 09 Dec 25 04:18 UTC │ 09 Dec 25 04:18 UTC │
	│ image          │ functional-717497 image rm kicbase/echo-server:functional-717497 --alsologtostderr                                                                              │ functional-717497 │ jenkins │ v1.37.0 │ 09 Dec 25 04:18 UTC │ 09 Dec 25 04:18 UTC │
	│ image          │ functional-717497 image ls                                                                                                                                      │ functional-717497 │ jenkins │ v1.37.0 │ 09 Dec 25 04:18 UTC │ 09 Dec 25 04:18 UTC │
	│ image          │ functional-717497 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr                                       │ functional-717497 │ jenkins │ v1.37.0 │ 09 Dec 25 04:18 UTC │ 09 Dec 25 04:18 UTC │
	│ image          │ functional-717497 image ls                                                                                                                                      │ functional-717497 │ jenkins │ v1.37.0 │ 09 Dec 25 04:18 UTC │ 09 Dec 25 04:18 UTC │
	│ update-context │ functional-717497 update-context --alsologtostderr -v=2                                                                                                         │ functional-717497 │ jenkins │ v1.37.0 │ 09 Dec 25 04:18 UTC │ 09 Dec 25 04:18 UTC │
	│ image          │ functional-717497 image save --daemon kicbase/echo-server:functional-717497 --alsologtostderr                                                                   │ functional-717497 │ jenkins │ v1.37.0 │ 09 Dec 25 04:18 UTC │ 09 Dec 25 04:18 UTC │
	│ update-context │ functional-717497 update-context --alsologtostderr -v=2                                                                                                         │ functional-717497 │ jenkins │ v1.37.0 │ 09 Dec 25 04:18 UTC │ 09 Dec 25 04:18 UTC │
	│ update-context │ functional-717497 update-context --alsologtostderr -v=2                                                                                                         │ functional-717497 │ jenkins │ v1.37.0 │ 09 Dec 25 04:18 UTC │ 09 Dec 25 04:18 UTC │
	│ image          │ functional-717497 image ls --format short --alsologtostderr                                                                                                     │ functional-717497 │ jenkins │ v1.37.0 │ 09 Dec 25 04:18 UTC │ 09 Dec 25 04:18 UTC │
	│ image          │ functional-717497 image ls --format yaml --alsologtostderr                                                                                                      │ functional-717497 │ jenkins │ v1.37.0 │ 09 Dec 25 04:18 UTC │ 09 Dec 25 04:18 UTC │
	│ ssh            │ functional-717497 ssh pgrep buildkitd                                                                                                                           │ functional-717497 │ jenkins │ v1.37.0 │ 09 Dec 25 04:18 UTC │                     │
	│ image          │ functional-717497 image ls --format json --alsologtostderr                                                                                                      │ functional-717497 │ jenkins │ v1.37.0 │ 09 Dec 25 04:18 UTC │ 09 Dec 25 04:18 UTC │
	│ image          │ functional-717497 image build -t localhost/my-image:functional-717497 testdata/build --alsologtostderr                                                          │ functional-717497 │ jenkins │ v1.37.0 │ 09 Dec 25 04:18 UTC │ 09 Dec 25 04:18 UTC │
	│ image          │ functional-717497 image ls --format table --alsologtostderr                                                                                                     │ functional-717497 │ jenkins │ v1.37.0 │ 09 Dec 25 04:18 UTC │ 09 Dec 25 04:18 UTC │
	│ image          │ functional-717497 image ls                                                                                                                                      │ functional-717497 │ jenkins │ v1.37.0 │ 09 Dec 25 04:18 UTC │ 09 Dec 25 04:18 UTC │
	│ delete         │ -p functional-717497                                                                                                                                            │ functional-717497 │ jenkins │ v1.37.0 │ 09 Dec 25 04:18 UTC │ 09 Dec 25 04:18 UTC │
	│ start          │ -p functional-667319 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0         │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:18 UTC │                     │
	│ start          │ -p functional-667319 --alsologtostderr -v=8                                                                                                                     │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:26 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/09 04:26:49
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 04:26:49.901158 1187425 out.go:360] Setting OutFile to fd 1 ...
	I1209 04:26:49.901350 1187425 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 04:26:49.901380 1187425 out.go:374] Setting ErrFile to fd 2...
	I1209 04:26:49.901407 1187425 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 04:26:49.902126 1187425 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-1142328/.minikube/bin
	I1209 04:26:49.902570 1187425 out.go:368] Setting JSON to false
	I1209 04:26:49.903455 1187425 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":25733,"bootTime":1765228677,"procs":156,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1209 04:26:49.903532 1187425 start.go:143] virtualization:  
	I1209 04:26:49.907035 1187425 out.go:179] * [functional-667319] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1209 04:26:49.910766 1187425 out.go:179]   - MINIKUBE_LOCATION=22081
	I1209 04:26:49.910878 1187425 notify.go:221] Checking for updates...
	I1209 04:26:49.916570 1187425 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 04:26:49.919423 1187425 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22081-1142328/kubeconfig
	I1209 04:26:49.922184 1187425 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-1142328/.minikube
	I1209 04:26:49.924947 1187425 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1209 04:26:49.927723 1187425 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 04:26:49.930999 1187425 config.go:182] Loaded profile config "functional-667319": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1209 04:26:49.931139 1187425 driver.go:422] Setting default libvirt URI to qemu:///system
	I1209 04:26:49.958230 1187425 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1209 04:26:49.958344 1187425 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 04:26:50.018007 1187425 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-09 04:26:50.006695366 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1209 04:26:50.018130 1187425 docker.go:319] overlay module found
	I1209 04:26:50.021068 1187425 out.go:179] * Using the docker driver based on existing profile
	I1209 04:26:50.024068 1187425 start.go:309] selected driver: docker
	I1209 04:26:50.024096 1187425 start.go:927] validating driver "docker" against &{Name:functional-667319 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-667319 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 04:26:50.024203 1187425 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 04:26:50.024322 1187425 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 04:26:50.086853 1187425 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-09 04:26:50.07716198 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1209 04:26:50.087299 1187425 cni.go:84] Creating CNI manager for ""
	I1209 04:26:50.087371 1187425 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1209 04:26:50.087429 1187425 start.go:353] cluster config:
	{Name:functional-667319 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-667319 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 04:26:50.090570 1187425 out.go:179] * Starting "functional-667319" primary control-plane node in "functional-667319" cluster
	I1209 04:26:50.093453 1187425 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1209 04:26:50.098431 1187425 out.go:179] * Pulling base image v0.0.48-1765184860-22066 ...
	I1209 04:26:50.101405 1187425 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1209 04:26:50.101471 1187425 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local docker daemon
	I1209 04:26:50.101485 1187425 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1209 04:26:50.101503 1187425 cache.go:65] Caching tarball of preloaded images
	I1209 04:26:50.101600 1187425 preload.go:238] Found /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1209 04:26:50.101616 1187425 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1209 04:26:50.101720 1187425 profile.go:143] Saving config to /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/config.json ...
	I1209 04:26:50.125607 1187425 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local docker daemon, skipping pull
	I1209 04:26:50.125633 1187425 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c exists in daemon, skipping load
	I1209 04:26:50.125648 1187425 cache.go:243] Successfully downloaded all kic artifacts
	I1209 04:26:50.125680 1187425 start.go:360] acquireMachinesLock for functional-667319: {Name:mk6c31f0747796f5f8ac8ea1653d6ee60fe2a47d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 04:26:50.125839 1187425 start.go:364] duration metric: took 130.318µs to acquireMachinesLock for "functional-667319"
	I1209 04:26:50.125869 1187425 start.go:96] Skipping create...Using existing machine configuration
	I1209 04:26:50.125878 1187425 fix.go:54] fixHost starting: 
	I1209 04:26:50.126147 1187425 cli_runner.go:164] Run: docker container inspect functional-667319 --format={{.State.Status}}
	I1209 04:26:50.147043 1187425 fix.go:112] recreateIfNeeded on functional-667319: state=Running err=<nil>
	W1209 04:26:50.147073 1187425 fix.go:138] unexpected machine state, will restart: <nil>
	I1209 04:26:50.150254 1187425 out.go:252] * Updating the running docker "functional-667319" container ...
	I1209 04:26:50.150291 1187425 machine.go:94] provisionDockerMachine start ...
	I1209 04:26:50.150379 1187425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-667319
	I1209 04:26:50.167513 1187425 main.go:143] libmachine: Using SSH client type: native
	I1209 04:26:50.167851 1187425 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 33900 <nil> <nil>}
	I1209 04:26:50.167868 1187425 main.go:143] libmachine: About to run SSH command:
	hostname
	I1209 04:26:50.327552 1187425 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-667319
	
	I1209 04:26:50.327578 1187425 ubuntu.go:182] provisioning hostname "functional-667319"
	I1209 04:26:50.327642 1187425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-667319
	I1209 04:26:50.345440 1187425 main.go:143] libmachine: Using SSH client type: native
	I1209 04:26:50.345757 1187425 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 33900 <nil> <nil>}
	I1209 04:26:50.345775 1187425 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-667319 && echo "functional-667319" | sudo tee /etc/hostname
	I1209 04:26:50.504917 1187425 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-667319
	
	I1209 04:26:50.505070 1187425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-667319
	I1209 04:26:50.522734 1187425 main.go:143] libmachine: Using SSH client type: native
	I1209 04:26:50.523054 1187425 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 33900 <nil> <nil>}
	I1209 04:26:50.523070 1187425 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-667319' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-667319/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-667319' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 04:26:50.676107 1187425 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1209 04:26:50.676133 1187425 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22081-1142328/.minikube CaCertPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22081-1142328/.minikube}
	I1209 04:26:50.676165 1187425 ubuntu.go:190] setting up certificates
	I1209 04:26:50.676182 1187425 provision.go:84] configureAuth start
	I1209 04:26:50.676245 1187425 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-667319
	I1209 04:26:50.692809 1187425 provision.go:143] copyHostCerts
	I1209 04:26:50.692850 1187425 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.pem
	I1209 04:26:50.692881 1187425 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.pem, removing ...
	I1209 04:26:50.692892 1187425 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.pem
	I1209 04:26:50.692964 1187425 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.pem (1078 bytes)
	I1209 04:26:50.693060 1187425 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22081-1142328/.minikube/cert.pem
	I1209 04:26:50.693088 1187425 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-1142328/.minikube/cert.pem, removing ...
	I1209 04:26:50.693096 1187425 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-1142328/.minikube/cert.pem
	I1209 04:26:50.693122 1187425 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22081-1142328/.minikube/cert.pem (1123 bytes)
	I1209 04:26:50.693175 1187425 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22081-1142328/.minikube/key.pem
	I1209 04:26:50.693199 1187425 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-1142328/.minikube/key.pem, removing ...
	I1209 04:26:50.693206 1187425 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-1142328/.minikube/key.pem
	I1209 04:26:50.693233 1187425 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22081-1142328/.minikube/key.pem (1675 bytes)
	I1209 04:26:50.693287 1187425 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca-key.pem org=jenkins.functional-667319 san=[127.0.0.1 192.168.49.2 functional-667319 localhost minikube]
	I1209 04:26:50.808459 1187425 provision.go:177] copyRemoteCerts
	I1209 04:26:50.808521 1187425 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 04:26:50.808568 1187425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-667319
	I1209 04:26:50.825015 1187425 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/functional-667319/id_rsa Username:docker}
	I1209 04:26:50.931904 1187425 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1209 04:26:50.931970 1187425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1209 04:26:50.950373 1187425 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1209 04:26:50.950430 1187425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1209 04:26:50.967052 1187425 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1209 04:26:50.967110 1187425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1209 04:26:50.984302 1187425 provision.go:87] duration metric: took 308.098174ms to configureAuth
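(The configureAuth phase above refreshes the docker-machine style TLS material: host certs are re-copied under .minikube/, a server certificate is signed against the minikube CA with the SANs listed in the log, and the results are scp'd into /etc/docker on the node. minikube signs the cert in-process with Go's crypto libraries; the following is only an equivalent openssl sketch of that signing step, with hypothetical output file names.)

	# bash sketch (uses process substitution): generate a key + CSR, then sign
	# it with the CA, embedding the SANs from the provision.go:117 line above
	openssl req -new -newkey rsa:2048 -nodes \
	  -keyout server-key.pem -out server.csr -subj "/O=jenkins.functional-667319"
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
	  -out server.pem -days 365 \
	  -extfile <(printf "subjectAltName=IP:127.0.0.1,IP:192.168.49.2,DNS:functional-667319,DNS:localhost,DNS:minikube")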
	I1209 04:26:50.984386 1187425 ubuntu.go:206] setting minikube options for container-runtime
	I1209 04:26:50.984596 1187425 config.go:182] Loaded profile config "functional-667319": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1209 04:26:50.984634 1187425 machine.go:97] duration metric: took 834.335015ms to provisionDockerMachine
	I1209 04:26:50.984656 1187425 start.go:293] postStartSetup for "functional-667319" (driver="docker")
	I1209 04:26:50.984680 1187425 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 04:26:50.984759 1187425 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 04:26:50.984834 1187425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-667319
	I1209 04:26:51.005808 1187425 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/functional-667319/id_rsa Username:docker}
	I1209 04:26:51.112821 1187425 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 04:26:51.116496 1187425 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1209 04:26:51.116518 1187425 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1209 04:26:51.116523 1187425 command_runner.go:130] > VERSION_ID="12"
	I1209 04:26:51.116528 1187425 command_runner.go:130] > VERSION="12 (bookworm)"
	I1209 04:26:51.116532 1187425 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1209 04:26:51.116536 1187425 command_runner.go:130] > ID=debian
	I1209 04:26:51.116540 1187425 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1209 04:26:51.116545 1187425 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1209 04:26:51.116554 1187425 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1209 04:26:51.116627 1187425 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1209 04:26:51.116648 1187425 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1209 04:26:51.116659 1187425 filesync.go:126] Scanning /home/jenkins/minikube-integration/22081-1142328/.minikube/addons for local assets ...
	I1209 04:26:51.116715 1187425 filesync.go:126] Scanning /home/jenkins/minikube-integration/22081-1142328/.minikube/files for local assets ...
	I1209 04:26:51.116799 1187425 filesync.go:149] local asset: /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/ssl/certs/11442312.pem -> 11442312.pem in /etc/ssl/certs
	I1209 04:26:51.116806 1187425 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/ssl/certs/11442312.pem -> /etc/ssl/certs/11442312.pem
	I1209 04:26:51.116882 1187425 filesync.go:149] local asset: /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/test/nested/copy/1144231/hosts -> hosts in /etc/test/nested/copy/1144231
	I1209 04:26:51.116886 1187425 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/test/nested/copy/1144231/hosts -> /etc/test/nested/copy/1144231/hosts
	I1209 04:26:51.116933 1187425 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1144231
	I1209 04:26:51.124908 1187425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/ssl/certs/11442312.pem --> /etc/ssl/certs/11442312.pem (1708 bytes)
	I1209 04:26:51.143368 1187425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/test/nested/copy/1144231/hosts --> /etc/test/nested/copy/1144231/hosts (40 bytes)
	I1209 04:26:51.161824 1187425 start.go:296] duration metric: took 177.139225ms for postStartSetup
	I1209 04:26:51.161916 1187425 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1209 04:26:51.161982 1187425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-667319
	I1209 04:26:51.181271 1187425 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/functional-667319/id_rsa Username:docker}
	I1209 04:26:51.284406 1187425 command_runner.go:130] > 12%
	I1209 04:26:51.284922 1187425 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1209 04:26:51.288619 1187425 command_runner.go:130] > 172G
	I1209 04:26:51.288953 1187425 fix.go:56] duration metric: took 1.163071262s for fixHost
	I1209 04:26:51.288968 1187425 start.go:83] releasing machines lock for "functional-667319", held for 1.163111146s
	I1209 04:26:51.289042 1187425 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-667319
	I1209 04:26:51.305835 1187425 ssh_runner.go:195] Run: cat /version.json
	I1209 04:26:51.305885 1187425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-667319
	I1209 04:26:51.305897 1187425 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 04:26:51.305950 1187425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-667319
	I1209 04:26:51.325384 1187425 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/functional-667319/id_rsa Username:docker}
	I1209 04:26:51.327293 1187425 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/functional-667319/id_rsa Username:docker}
	I1209 04:26:51.427270 1187425 command_runner.go:130] > {"iso_version": "v1.37.0-1764843329-22032", "kicbase_version": "v0.0.48-1765184860-22066", "minikube_version": "v1.37.0", "commit": "27bcd52be11288bda2f9abde063aa47b22607695"}
	I1209 04:26:51.427541 1187425 ssh_runner.go:195] Run: systemctl --version
	I1209 04:26:51.517549 1187425 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1209 04:26:51.520210 1187425 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1209 04:26:51.520243 1187425 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1209 04:26:51.520320 1187425 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1209 04:26:51.524536 1187425 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1209 04:26:51.524574 1187425 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 04:26:51.524644 1187425 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 04:26:51.532138 1187425 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1209 04:26:51.532170 1187425 start.go:496] detecting cgroup driver to use...
	I1209 04:26:51.532202 1187425 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1209 04:26:51.532264 1187425 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1209 04:26:51.547055 1187425 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1209 04:26:51.559544 1187425 docker.go:218] disabling cri-docker service (if available) ...
	I1209 04:26:51.559644 1187425 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 04:26:51.574821 1187425 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 04:26:51.587447 1187425 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 04:26:51.703845 1187425 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 04:26:51.839863 1187425 docker.go:234] disabling docker service ...
	I1209 04:26:51.839930 1187425 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 04:26:51.856255 1187425 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 04:26:51.869081 1187425 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 04:26:51.995560 1187425 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 04:26:52.125293 1187425 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 04:26:52.137749 1187425 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 04:26:52.150135 1187425 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
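(The tee above pins crictl to the containerd socket via /etc/crictl.yaml. Assuming a recent crictl, the same file can be produced with its config subcommand; this is a sketch of an equivalent, not what the test harness runs.)

	# equivalent to the tee into /etc/crictl.yaml shown above
	sudo crictl config --set runtime-endpoint=unix:///run/containerd/containerd.sock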
	I1209 04:26:52.151507 1187425 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1209 04:26:52.160197 1187425 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1209 04:26:52.168921 1187425 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1209 04:26:52.169008 1187425 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1209 04:26:52.177592 1187425 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1209 04:26:52.185997 1187425 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1209 04:26:52.194259 1187425 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1209 04:26:52.202620 1187425 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 04:26:52.210466 1187425 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1209 04:26:52.219232 1187425 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1209 04:26:52.227579 1187425 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1209 04:26:52.236059 1187425 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 04:26:52.242619 1187425 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1209 04:26:52.243485 1187425 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 04:26:52.250890 1187425 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 04:26:52.361246 1187425 ssh_runner.go:195] Run: sudo systemctl restart containerd
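(The sed edits preceding this restart rewrite /etc/containerd/config.toml in place so containerd matches the "cgroupfs" driver detected on the host: SystemdCgroup is forced to false, the pause image is pinned to registry.k8s.io/pause:3.10.1, and the legacy runc runtime names are mapped to io.containerd.runc.v2. A sketch of how to spot-check the result on the node; exact section layout varies by containerd version.)

	# verify the effect of the sed edits (paths taken from the log)
	sudo grep -n 'SystemdCgroup' /etc/containerd/config.toml    # expect: SystemdCgroup = false
	sudo grep -n 'sandbox_image' /etc/containerd/config.toml    # expect: registry.k8s.io/pause:3.10.1
	sudo grep -n 'io.containerd.runc' /etc/containerd/config.toml  # expect only io.containerd.runc.v2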
	I1209 04:26:52.490552 1187425 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1209 04:26:52.490653 1187425 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1209 04:26:52.497112 1187425 command_runner.go:130] >   File: /run/containerd/containerd.sock
	I1209 04:26:52.497174 1187425 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1209 04:26:52.497206 1187425 command_runner.go:130] > Device: 0,72	Inode: 1613        Links: 1
	I1209 04:26:52.497227 1187425 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1209 04:26:52.497247 1187425 command_runner.go:130] > Access: 2025-12-09 04:26:52.442263978 +0000
	I1209 04:26:52.497281 1187425 command_runner.go:130] > Modify: 2025-12-09 04:26:52.442263978 +0000
	I1209 04:26:52.497301 1187425 command_runner.go:130] > Change: 2025-12-09 04:26:52.442263978 +0000
	I1209 04:26:52.497319 1187425 command_runner.go:130] >  Birth: -
	I1209 04:26:52.497534 1187425 start.go:564] Will wait 60s for crictl version
	I1209 04:26:52.497619 1187425 ssh_runner.go:195] Run: which crictl
	I1209 04:26:52.501257 1187425 command_runner.go:130] > /usr/local/bin/crictl
	I1209 04:26:52.502001 1187425 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1209 04:26:52.535942 1187425 command_runner.go:130] > Version:  0.1.0
	I1209 04:26:52.535964 1187425 command_runner.go:130] > RuntimeName:  containerd
	I1209 04:26:52.535970 1187425 command_runner.go:130] > RuntimeVersion:  v2.2.0
	I1209 04:26:52.535975 1187425 command_runner.go:130] > RuntimeApiVersion:  v1
	I1209 04:26:52.535985 1187425 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1209 04:26:52.536096 1187425 ssh_runner.go:195] Run: containerd --version
	I1209 04:26:52.556939 1187425 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
	I1209 04:26:52.562389 1187425 ssh_runner.go:195] Run: containerd --version
	I1209 04:26:52.582187 1187425 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
	I1209 04:26:52.587659 1187425 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1209 04:26:52.590705 1187425 cli_runner.go:164] Run: docker network inspect functional-667319 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1209 04:26:52.606900 1187425 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1209 04:26:52.610849 1187425 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1209 04:26:52.610974 1187425 kubeadm.go:884] updating cluster {Name:functional-667319 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-667319 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 04:26:52.611074 1187425 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1209 04:26:52.611135 1187425 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 04:26:52.634142 1187425 command_runner.go:130] > {
	I1209 04:26:52.634161 1187425 command_runner.go:130] >   "images":  [
	I1209 04:26:52.634166 1187425 command_runner.go:130] >     {
	I1209 04:26:52.634175 1187425 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1209 04:26:52.634180 1187425 command_runner.go:130] >       "repoTags":  [
	I1209 04:26:52.634186 1187425 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1209 04:26:52.634190 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.634194 1187425 command_runner.go:130] >       "repoDigests":  [
	I1209 04:26:52.634210 1187425 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1209 04:26:52.634213 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.634218 1187425 command_runner.go:130] >       "size":  "40636774",
	I1209 04:26:52.634222 1187425 command_runner.go:130] >       "username":  "",
	I1209 04:26:52.634230 1187425 command_runner.go:130] >       "pinned":  false
	I1209 04:26:52.634233 1187425 command_runner.go:130] >     },
	I1209 04:26:52.634236 1187425 command_runner.go:130] >     {
	I1209 04:26:52.634246 1187425 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1209 04:26:52.634251 1187425 command_runner.go:130] >       "repoTags":  [
	I1209 04:26:52.634256 1187425 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1209 04:26:52.634259 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.634263 1187425 command_runner.go:130] >       "repoDigests":  [
	I1209 04:26:52.634271 1187425 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1209 04:26:52.634274 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.634278 1187425 command_runner.go:130] >       "size":  "8034419",
	I1209 04:26:52.634282 1187425 command_runner.go:130] >       "username":  "",
	I1209 04:26:52.634286 1187425 command_runner.go:130] >       "pinned":  false
	I1209 04:26:52.634289 1187425 command_runner.go:130] >     },
	I1209 04:26:52.634292 1187425 command_runner.go:130] >     {
	I1209 04:26:52.634298 1187425 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1209 04:26:52.634302 1187425 command_runner.go:130] >       "repoTags":  [
	I1209 04:26:52.634307 1187425 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1209 04:26:52.634310 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.634317 1187425 command_runner.go:130] >       "repoDigests":  [
	I1209 04:26:52.634325 1187425 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1209 04:26:52.634328 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.634333 1187425 command_runner.go:130] >       "size":  "21168808",
	I1209 04:26:52.634337 1187425 command_runner.go:130] >       "username":  "nonroot",
	I1209 04:26:52.634341 1187425 command_runner.go:130] >       "pinned":  false
	I1209 04:26:52.634349 1187425 command_runner.go:130] >     },
	I1209 04:26:52.634355 1187425 command_runner.go:130] >     {
	I1209 04:26:52.634362 1187425 command_runner.go:130] >       "id":  "sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1209 04:26:52.634367 1187425 command_runner.go:130] >       "repoTags":  [
	I1209 04:26:52.634372 1187425 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1209 04:26:52.634375 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.634379 1187425 command_runner.go:130] >       "repoDigests":  [
	I1209 04:26:52.634387 1187425 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"
	I1209 04:26:52.634393 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.634397 1187425 command_runner.go:130] >       "size":  "21136588",
	I1209 04:26:52.634402 1187425 command_runner.go:130] >       "uid":  {
	I1209 04:26:52.634405 1187425 command_runner.go:130] >         "value":  "0"
	I1209 04:26:52.634408 1187425 command_runner.go:130] >       },
	I1209 04:26:52.634412 1187425 command_runner.go:130] >       "username":  "",
	I1209 04:26:52.634415 1187425 command_runner.go:130] >       "pinned":  false
	I1209 04:26:52.634418 1187425 command_runner.go:130] >     },
	I1209 04:26:52.634421 1187425 command_runner.go:130] >     {
	I1209 04:26:52.634428 1187425 command_runner.go:130] >       "id":  "sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1209 04:26:52.634431 1187425 command_runner.go:130] >       "repoTags":  [
	I1209 04:26:52.634437 1187425 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1209 04:26:52.634440 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.634443 1187425 command_runner.go:130] >       "repoDigests":  [
	I1209 04:26:52.634451 1187425 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58"
	I1209 04:26:52.634453 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.634457 1187425 command_runner.go:130] >       "size":  "24678359",
	I1209 04:26:52.634461 1187425 command_runner.go:130] >       "uid":  {
	I1209 04:26:52.634468 1187425 command_runner.go:130] >         "value":  "0"
	I1209 04:26:52.634471 1187425 command_runner.go:130] >       },
	I1209 04:26:52.634474 1187425 command_runner.go:130] >       "username":  "",
	I1209 04:26:52.634478 1187425 command_runner.go:130] >       "pinned":  false
	I1209 04:26:52.634480 1187425 command_runner.go:130] >     },
	I1209 04:26:52.634483 1187425 command_runner.go:130] >     {
	I1209 04:26:52.634490 1187425 command_runner.go:130] >       "id":  "sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1209 04:26:52.634493 1187425 command_runner.go:130] >       "repoTags":  [
	I1209 04:26:52.634499 1187425 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1209 04:26:52.634501 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.634505 1187425 command_runner.go:130] >       "repoDigests":  [
	I1209 04:26:52.634513 1187425 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d"
	I1209 04:26:52.634516 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.634520 1187425 command_runner.go:130] >       "size":  "20661043",
	I1209 04:26:52.634523 1187425 command_runner.go:130] >       "uid":  {
	I1209 04:26:52.634532 1187425 command_runner.go:130] >         "value":  "0"
	I1209 04:26:52.634535 1187425 command_runner.go:130] >       },
	I1209 04:26:52.634539 1187425 command_runner.go:130] >       "username":  "",
	I1209 04:26:52.634543 1187425 command_runner.go:130] >       "pinned":  false
	I1209 04:26:52.634546 1187425 command_runner.go:130] >     },
	I1209 04:26:52.634548 1187425 command_runner.go:130] >     {
	I1209 04:26:52.634555 1187425 command_runner.go:130] >       "id":  "sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1209 04:26:52.634558 1187425 command_runner.go:130] >       "repoTags":  [
	I1209 04:26:52.634563 1187425 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1209 04:26:52.634566 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.634569 1187425 command_runner.go:130] >       "repoDigests":  [
	I1209 04:26:52.634577 1187425 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1209 04:26:52.634580 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.634583 1187425 command_runner.go:130] >       "size":  "22429671",
	I1209 04:26:52.634587 1187425 command_runner.go:130] >       "username":  "",
	I1209 04:26:52.634591 1187425 command_runner.go:130] >       "pinned":  false
	I1209 04:26:52.634594 1187425 command_runner.go:130] >     },
	I1209 04:26:52.634597 1187425 command_runner.go:130] >     {
	I1209 04:26:52.634604 1187425 command_runner.go:130] >       "id":  "sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1209 04:26:52.634607 1187425 command_runner.go:130] >       "repoTags":  [
	I1209 04:26:52.634613 1187425 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1209 04:26:52.634616 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.634620 1187425 command_runner.go:130] >       "repoDigests":  [
	I1209 04:26:52.634627 1187425 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6"
	I1209 04:26:52.634630 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.634634 1187425 command_runner.go:130] >       "size":  "15391364",
	I1209 04:26:52.634638 1187425 command_runner.go:130] >       "uid":  {
	I1209 04:26:52.634641 1187425 command_runner.go:130] >         "value":  "0"
	I1209 04:26:52.634644 1187425 command_runner.go:130] >       },
	I1209 04:26:52.634649 1187425 command_runner.go:130] >       "username":  "",
	I1209 04:26:52.634653 1187425 command_runner.go:130] >       "pinned":  false
	I1209 04:26:52.634655 1187425 command_runner.go:130] >     },
	I1209 04:26:52.634659 1187425 command_runner.go:130] >     {
	I1209 04:26:52.634670 1187425 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1209 04:26:52.634674 1187425 command_runner.go:130] >       "repoTags":  [
	I1209 04:26:52.634678 1187425 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1209 04:26:52.634681 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.634685 1187425 command_runner.go:130] >       "repoDigests":  [
	I1209 04:26:52.634693 1187425 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1209 04:26:52.634695 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.634699 1187425 command_runner.go:130] >       "size":  "267939",
	I1209 04:26:52.634703 1187425 command_runner.go:130] >       "uid":  {
	I1209 04:26:52.634706 1187425 command_runner.go:130] >         "value":  "65535"
	I1209 04:26:52.634709 1187425 command_runner.go:130] >       },
	I1209 04:26:52.634713 1187425 command_runner.go:130] >       "username":  "",
	I1209 04:26:52.634717 1187425 command_runner.go:130] >       "pinned":  true
	I1209 04:26:52.634720 1187425 command_runner.go:130] >     }
	I1209 04:26:52.634723 1187425 command_runner.go:130] >   ]
	I1209 04:26:52.634726 1187425 command_runner.go:130] > }
	I1209 04:26:52.636238 1187425 containerd.go:627] all images are preloaded for containerd runtime.
	I1209 04:26:52.636265 1187425 containerd.go:534] Images already preloaded, skipping extraction
	I1209 04:26:52.636328 1187425 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 04:26:52.662300 1187425 command_runner.go:130] > {
	I1209 04:26:52.662318 1187425 command_runner.go:130] >   "images":  [
	I1209 04:26:52.662323 1187425 command_runner.go:130] >     {
	I1209 04:26:52.662332 1187425 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1209 04:26:52.662349 1187425 command_runner.go:130] >       "repoTags":  [
	I1209 04:26:52.662355 1187425 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1209 04:26:52.662358 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.662363 1187425 command_runner.go:130] >       "repoDigests":  [
	I1209 04:26:52.662375 1187425 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1209 04:26:52.662379 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.662383 1187425 command_runner.go:130] >       "size":  "40636774",
	I1209 04:26:52.662388 1187425 command_runner.go:130] >       "username":  "",
	I1209 04:26:52.662392 1187425 command_runner.go:130] >       "pinned":  false
	I1209 04:26:52.662395 1187425 command_runner.go:130] >     },
	I1209 04:26:52.662398 1187425 command_runner.go:130] >     {
	I1209 04:26:52.662406 1187425 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1209 04:26:52.662410 1187425 command_runner.go:130] >       "repoTags":  [
	I1209 04:26:52.662416 1187425 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1209 04:26:52.662420 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.662424 1187425 command_runner.go:130] >       "repoDigests":  [
	I1209 04:26:52.662436 1187425 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1209 04:26:52.662440 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.662444 1187425 command_runner.go:130] >       "size":  "8034419",
	I1209 04:26:52.662448 1187425 command_runner.go:130] >       "username":  "",
	I1209 04:26:52.662452 1187425 command_runner.go:130] >       "pinned":  false
	I1209 04:26:52.662460 1187425 command_runner.go:130] >     },
	I1209 04:26:52.662463 1187425 command_runner.go:130] >     {
	I1209 04:26:52.662470 1187425 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1209 04:26:52.662474 1187425 command_runner.go:130] >       "repoTags":  [
	I1209 04:26:52.662479 1187425 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1209 04:26:52.662482 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.662488 1187425 command_runner.go:130] >       "repoDigests":  [
	I1209 04:26:52.662496 1187425 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1209 04:26:52.662500 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.662504 1187425 command_runner.go:130] >       "size":  "21168808",
	I1209 04:26:52.662508 1187425 command_runner.go:130] >       "username":  "nonroot",
	I1209 04:26:52.662512 1187425 command_runner.go:130] >       "pinned":  false
	I1209 04:26:52.662515 1187425 command_runner.go:130] >     },
	I1209 04:26:52.662519 1187425 command_runner.go:130] >     {
	I1209 04:26:52.662525 1187425 command_runner.go:130] >       "id":  "sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1209 04:26:52.662529 1187425 command_runner.go:130] >       "repoTags":  [
	I1209 04:26:52.662534 1187425 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1209 04:26:52.662538 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.662541 1187425 command_runner.go:130] >       "repoDigests":  [
	I1209 04:26:52.662549 1187425 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"
	I1209 04:26:52.662552 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.662556 1187425 command_runner.go:130] >       "size":  "21136588",
	I1209 04:26:52.662561 1187425 command_runner.go:130] >       "uid":  {
	I1209 04:26:52.662565 1187425 command_runner.go:130] >         "value":  "0"
	I1209 04:26:52.662568 1187425 command_runner.go:130] >       },
	I1209 04:26:52.662572 1187425 command_runner.go:130] >       "username":  "",
	I1209 04:26:52.662576 1187425 command_runner.go:130] >       "pinned":  false
	I1209 04:26:52.662579 1187425 command_runner.go:130] >     },
	I1209 04:26:52.662585 1187425 command_runner.go:130] >     {
	I1209 04:26:52.662592 1187425 command_runner.go:130] >       "id":  "sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1209 04:26:52.662596 1187425 command_runner.go:130] >       "repoTags":  [
	I1209 04:26:52.662601 1187425 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1209 04:26:52.662605 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.662609 1187425 command_runner.go:130] >       "repoDigests":  [
	I1209 04:26:52.662617 1187425 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58"
	I1209 04:26:52.662619 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.662624 1187425 command_runner.go:130] >       "size":  "24678359",
	I1209 04:26:52.662627 1187425 command_runner.go:130] >       "uid":  {
	I1209 04:26:52.662639 1187425 command_runner.go:130] >         "value":  "0"
	I1209 04:26:52.662642 1187425 command_runner.go:130] >       },
	I1209 04:26:52.662646 1187425 command_runner.go:130] >       "username":  "",
	I1209 04:26:52.662650 1187425 command_runner.go:130] >       "pinned":  false
	I1209 04:26:52.662653 1187425 command_runner.go:130] >     },
	I1209 04:26:52.662656 1187425 command_runner.go:130] >     {
	I1209 04:26:52.662663 1187425 command_runner.go:130] >       "id":  "sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1209 04:26:52.662667 1187425 command_runner.go:130] >       "repoTags":  [
	I1209 04:26:52.662672 1187425 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1209 04:26:52.662675 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.662679 1187425 command_runner.go:130] >       "repoDigests":  [
	I1209 04:26:52.662687 1187425 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d"
	I1209 04:26:52.662690 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.662694 1187425 command_runner.go:130] >       "size":  "20661043",
	I1209 04:26:52.662697 1187425 command_runner.go:130] >       "uid":  {
	I1209 04:26:52.662701 1187425 command_runner.go:130] >         "value":  "0"
	I1209 04:26:52.662704 1187425 command_runner.go:130] >       },
	I1209 04:26:52.662707 1187425 command_runner.go:130] >       "username":  "",
	I1209 04:26:52.662712 1187425 command_runner.go:130] >       "pinned":  false
	I1209 04:26:52.662714 1187425 command_runner.go:130] >     },
	I1209 04:26:52.662717 1187425 command_runner.go:130] >     {
	I1209 04:26:52.662725 1187425 command_runner.go:130] >       "id":  "sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1209 04:26:52.662729 1187425 command_runner.go:130] >       "repoTags":  [
	I1209 04:26:52.662737 1187425 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1209 04:26:52.662741 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.662744 1187425 command_runner.go:130] >       "repoDigests":  [
	I1209 04:26:52.662752 1187425 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1209 04:26:52.662755 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.662759 1187425 command_runner.go:130] >       "size":  "22429671",
	I1209 04:26:52.662763 1187425 command_runner.go:130] >       "username":  "",
	I1209 04:26:52.662767 1187425 command_runner.go:130] >       "pinned":  false
	I1209 04:26:52.662770 1187425 command_runner.go:130] >     },
	I1209 04:26:52.662774 1187425 command_runner.go:130] >     {
	I1209 04:26:52.662781 1187425 command_runner.go:130] >       "id":  "sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1209 04:26:52.662785 1187425 command_runner.go:130] >       "repoTags":  [
	I1209 04:26:52.662791 1187425 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1209 04:26:52.662794 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.662798 1187425 command_runner.go:130] >       "repoDigests":  [
	I1209 04:26:52.662805 1187425 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6"
	I1209 04:26:52.662808 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.662813 1187425 command_runner.go:130] >       "size":  "15391364",
	I1209 04:26:52.662816 1187425 command_runner.go:130] >       "uid":  {
	I1209 04:26:52.662820 1187425 command_runner.go:130] >         "value":  "0"
	I1209 04:26:52.662823 1187425 command_runner.go:130] >       },
	I1209 04:26:52.662827 1187425 command_runner.go:130] >       "username":  "",
	I1209 04:26:52.662831 1187425 command_runner.go:130] >       "pinned":  false
	I1209 04:26:52.662834 1187425 command_runner.go:130] >     },
	I1209 04:26:52.662837 1187425 command_runner.go:130] >     {
	I1209 04:26:52.662843 1187425 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1209 04:26:52.662847 1187425 command_runner.go:130] >       "repoTags":  [
	I1209 04:26:52.662852 1187425 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1209 04:26:52.662855 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.662858 1187425 command_runner.go:130] >       "repoDigests":  [
	I1209 04:26:52.662866 1187425 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1209 04:26:52.662869 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.662873 1187425 command_runner.go:130] >       "size":  "267939",
	I1209 04:26:52.662881 1187425 command_runner.go:130] >       "uid":  {
	I1209 04:26:52.662886 1187425 command_runner.go:130] >         "value":  "65535"
	I1209 04:26:52.662890 1187425 command_runner.go:130] >       },
	I1209 04:26:52.662894 1187425 command_runner.go:130] >       "username":  "",
	I1209 04:26:52.662897 1187425 command_runner.go:130] >       "pinned":  true
	I1209 04:26:52.662900 1187425 command_runner.go:130] >     }
	I1209 04:26:52.662903 1187425 command_runner.go:130] >   ]
	I1209 04:26:52.662906 1187425 command_runner.go:130] > }
	I1209 04:26:52.665193 1187425 containerd.go:627] all images are preloaded for containerd runtime.
	I1209 04:26:52.665212 1187425 cache_images.go:86] Images are preloaded, skipping loading
	I1209 04:26:52.665219 1187425 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
	I1209 04:26:52.665322 1187425 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-667319 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-667319 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
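(The unit fragment above is what is installed as the kubelet systemd drop-in a few lines below; the empty ExecStart= line clears the stock definition before the minikube-specific command line is set. On the node, the effective merge of the unit and its drop-ins can be inspected with:)

	# show kubelet.service plus all drop-ins, as systemd resolves them
	systemctl cat kubelet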
	I1209 04:26:52.665384 1187425 ssh_runner.go:195] Run: sudo crictl info
	I1209 04:26:52.686718 1187425 command_runner.go:130] > {
	I1209 04:26:52.686786 1187425 command_runner.go:130] >   "cniconfig": {
	I1209 04:26:52.686805 1187425 command_runner.go:130] >     "Networks": [
	I1209 04:26:52.686825 1187425 command_runner.go:130] >       {
	I1209 04:26:52.686864 1187425 command_runner.go:130] >         "Config": {
	I1209 04:26:52.686886 1187425 command_runner.go:130] >           "CNIVersion": "0.3.1",
	I1209 04:26:52.686905 1187425 command_runner.go:130] >           "Name": "cni-loopback",
	I1209 04:26:52.686923 1187425 command_runner.go:130] >           "Plugins": [
	I1209 04:26:52.686940 1187425 command_runner.go:130] >             {
	I1209 04:26:52.686967 1187425 command_runner.go:130] >               "Network": {
	I1209 04:26:52.686991 1187425 command_runner.go:130] >                 "ipam": {},
	I1209 04:26:52.687011 1187425 command_runner.go:130] >                 "type": "loopback"
	I1209 04:26:52.687028 1187425 command_runner.go:130] >               },
	I1209 04:26:52.687048 1187425 command_runner.go:130] >               "Source": "{\"type\":\"loopback\"}"
	I1209 04:26:52.687074 1187425 command_runner.go:130] >             }
	I1209 04:26:52.687097 1187425 command_runner.go:130] >           ],
	I1209 04:26:52.687120 1187425 command_runner.go:130] >           "Source": "{\n\"cniVersion\": \"0.3.1\",\n\"name\": \"cni-loopback\",\n\"plugins\": [{\n  \"type\": \"loopback\"\n}]\n}"
	I1209 04:26:52.687138 1187425 command_runner.go:130] >         },
	I1209 04:26:52.687160 1187425 command_runner.go:130] >         "IFName": "lo"
	I1209 04:26:52.687191 1187425 command_runner.go:130] >       }
	I1209 04:26:52.687207 1187425 command_runner.go:130] >     ],
	I1209 04:26:52.687225 1187425 command_runner.go:130] >     "PluginConfDir": "/etc/cni/net.d",
	I1209 04:26:52.687243 1187425 command_runner.go:130] >     "PluginDirs": [
	I1209 04:26:52.687272 1187425 command_runner.go:130] >       "/opt/cni/bin"
	I1209 04:26:52.687293 1187425 command_runner.go:130] >     ],
	I1209 04:26:52.687317 1187425 command_runner.go:130] >     "PluginMaxConfNum": 1,
	I1209 04:26:52.687334 1187425 command_runner.go:130] >     "Prefix": "eth"
	I1209 04:26:52.687351 1187425 command_runner.go:130] >   },
	I1209 04:26:52.687378 1187425 command_runner.go:130] >   "config": {
	I1209 04:26:52.687401 1187425 command_runner.go:130] >     "cdiSpecDirs": [
	I1209 04:26:52.687418 1187425 command_runner.go:130] >       "/etc/cdi",
	I1209 04:26:52.687438 1187425 command_runner.go:130] >       "/var/run/cdi"
	I1209 04:26:52.687457 1187425 command_runner.go:130] >     ],
	I1209 04:26:52.687483 1187425 command_runner.go:130] >     "cni": {
	I1209 04:26:52.687505 1187425 command_runner.go:130] >       "binDir": "",
	I1209 04:26:52.687560 1187425 command_runner.go:130] >       "binDirs": [
	I1209 04:26:52.687588 1187425 command_runner.go:130] >         "/opt/cni/bin"
	I1209 04:26:52.687609 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.687628 1187425 command_runner.go:130] >       "confDir": "/etc/cni/net.d",
	I1209 04:26:52.687646 1187425 command_runner.go:130] >       "confTemplate": "",
	I1209 04:26:52.687665 1187425 command_runner.go:130] >       "ipPref": "",
	I1209 04:26:52.687692 1187425 command_runner.go:130] >       "maxConfNum": 1,
	I1209 04:26:52.687715 1187425 command_runner.go:130] >       "setupSerially": false,
	I1209 04:26:52.687733 1187425 command_runner.go:130] >       "useInternalLoopback": false
	I1209 04:26:52.687749 1187425 command_runner.go:130] >     },
	I1209 04:26:52.687775 1187425 command_runner.go:130] >     "containerd": {
	I1209 04:26:52.687802 1187425 command_runner.go:130] >       "defaultRuntimeName": "runc",
	I1209 04:26:52.687825 1187425 command_runner.go:130] >       "ignoreBlockIONotEnabledErrors": false,
	I1209 04:26:52.687845 1187425 command_runner.go:130] >       "ignoreRdtNotEnabledErrors": false,
	I1209 04:26:52.687861 1187425 command_runner.go:130] >       "runtimes": {
	I1209 04:26:52.687878 1187425 command_runner.go:130] >         "runc": {
	I1209 04:26:52.687905 1187425 command_runner.go:130] >           "ContainerAnnotations": null,
	I1209 04:26:52.687929 1187425 command_runner.go:130] >           "PodAnnotations": null,
	I1209 04:26:52.687948 1187425 command_runner.go:130] >           "baseRuntimeSpec": "",
	I1209 04:26:52.687965 1187425 command_runner.go:130] >           "cgroupWritable": false,
	I1209 04:26:52.687982 1187425 command_runner.go:130] >           "cniConfDir": "",
	I1209 04:26:52.688009 1187425 command_runner.go:130] >           "cniMaxConfNum": 0,
	I1209 04:26:52.688042 1187425 command_runner.go:130] >           "io_type": "",
	I1209 04:26:52.688055 1187425 command_runner.go:130] >           "options": {
	I1209 04:26:52.688060 1187425 command_runner.go:130] >             "BinaryName": "",
	I1209 04:26:52.688065 1187425 command_runner.go:130] >             "CriuImagePath": "",
	I1209 04:26:52.688070 1187425 command_runner.go:130] >             "CriuWorkPath": "",
	I1209 04:26:52.688078 1187425 command_runner.go:130] >             "IoGid": 0,
	I1209 04:26:52.688082 1187425 command_runner.go:130] >             "IoUid": 0,
	I1209 04:26:52.688086 1187425 command_runner.go:130] >             "NoNewKeyring": false,
	I1209 04:26:52.688093 1187425 command_runner.go:130] >             "Root": "",
	I1209 04:26:52.688097 1187425 command_runner.go:130] >             "ShimCgroup": "",
	I1209 04:26:52.688109 1187425 command_runner.go:130] >             "SystemdCgroup": false
	I1209 04:26:52.688113 1187425 command_runner.go:130] >           },
	I1209 04:26:52.688118 1187425 command_runner.go:130] >           "privileged_without_host_devices": false,
	I1209 04:26:52.688128 1187425 command_runner.go:130] >           "privileged_without_host_devices_all_devices_allowed": false,
	I1209 04:26:52.688138 1187425 command_runner.go:130] >           "runtimePath": "",
	I1209 04:26:52.688145 1187425 command_runner.go:130] >           "runtimeType": "io.containerd.runc.v2",
	I1209 04:26:52.688153 1187425 command_runner.go:130] >           "sandboxer": "podsandbox",
	I1209 04:26:52.688157 1187425 command_runner.go:130] >           "snapshotter": ""
	I1209 04:26:52.688161 1187425 command_runner.go:130] >         }
	I1209 04:26:52.688164 1187425 command_runner.go:130] >       }
	I1209 04:26:52.688167 1187425 command_runner.go:130] >     },
	I1209 04:26:52.688181 1187425 command_runner.go:130] >     "containerdEndpoint": "/run/containerd/containerd.sock",
	I1209 04:26:52.688190 1187425 command_runner.go:130] >     "containerdRootDir": "/var/lib/containerd",
	I1209 04:26:52.688198 1187425 command_runner.go:130] >     "device_ownership_from_security_context": false,
	I1209 04:26:52.688205 1187425 command_runner.go:130] >     "disableApparmor": false,
	I1209 04:26:52.688210 1187425 command_runner.go:130] >     "disableHugetlbController": true,
	I1209 04:26:52.688218 1187425 command_runner.go:130] >     "disableProcMount": false,
	I1209 04:26:52.688223 1187425 command_runner.go:130] >     "drainExecSyncIOTimeout": "0s",
	I1209 04:26:52.688231 1187425 command_runner.go:130] >     "enableCDI": true,
	I1209 04:26:52.688235 1187425 command_runner.go:130] >     "enableSelinux": false,
	I1209 04:26:52.688240 1187425 command_runner.go:130] >     "enableUnprivilegedICMP": true,
	I1209 04:26:52.688248 1187425 command_runner.go:130] >     "enableUnprivilegedPorts": true,
	I1209 04:26:52.688253 1187425 command_runner.go:130] >     "ignoreDeprecationWarnings": null,
	I1209 04:26:52.688259 1187425 command_runner.go:130] >     "ignoreImageDefinedVolumes": false,
	I1209 04:26:52.688269 1187425 command_runner.go:130] >     "maxContainerLogLineSize": 16384,
	I1209 04:26:52.688278 1187425 command_runner.go:130] >     "netnsMountsUnderStateDir": false,
	I1209 04:26:52.688282 1187425 command_runner.go:130] >     "restrictOOMScoreAdj": false,
	I1209 04:26:52.688293 1187425 command_runner.go:130] >     "rootDir": "/var/lib/containerd/io.containerd.grpc.v1.cri",
	I1209 04:26:52.688297 1187425 command_runner.go:130] >     "selinuxCategoryRange": 1024,
	I1209 04:26:52.688306 1187425 command_runner.go:130] >     "stateDir": "/run/containerd/io.containerd.grpc.v1.cri",
	I1209 04:26:52.688312 1187425 command_runner.go:130] >     "tolerateMissingHugetlbController": true,
	I1209 04:26:52.688320 1187425 command_runner.go:130] >     "unsetSeccompProfile": ""
	I1209 04:26:52.688323 1187425 command_runner.go:130] >   },
	I1209 04:26:52.688327 1187425 command_runner.go:130] >   "features": {
	I1209 04:26:52.688332 1187425 command_runner.go:130] >     "supplemental_groups_policy": true
	I1209 04:26:52.688337 1187425 command_runner.go:130] >   },
	I1209 04:26:52.688341 1187425 command_runner.go:130] >   "golang": "go1.24.9",
	I1209 04:26:52.688355 1187425 command_runner.go:130] >   "lastCNILoadStatus": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1209 04:26:52.688368 1187425 command_runner.go:130] >   "lastCNILoadStatus.default": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1209 04:26:52.688376 1187425 command_runner.go:130] >   "runtimeHandlers": [
	I1209 04:26:52.688379 1187425 command_runner.go:130] >     {
	I1209 04:26:52.688388 1187425 command_runner.go:130] >       "features": {
	I1209 04:26:52.688394 1187425 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1209 04:26:52.688403 1187425 command_runner.go:130] >         "user_namespaces": true
	I1209 04:26:52.688406 1187425 command_runner.go:130] >       }
	I1209 04:26:52.688409 1187425 command_runner.go:130] >     },
	I1209 04:26:52.688412 1187425 command_runner.go:130] >     {
	I1209 04:26:52.688416 1187425 command_runner.go:130] >       "features": {
	I1209 04:26:52.688423 1187425 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1209 04:26:52.688432 1187425 command_runner.go:130] >         "user_namespaces": true
	I1209 04:26:52.688435 1187425 command_runner.go:130] >       },
	I1209 04:26:52.688439 1187425 command_runner.go:130] >       "name": "runc"
	I1209 04:26:52.688446 1187425 command_runner.go:130] >     }
	I1209 04:26:52.688449 1187425 command_runner.go:130] >   ],
	I1209 04:26:52.688457 1187425 command_runner.go:130] >   "status": {
	I1209 04:26:52.688461 1187425 command_runner.go:130] >     "conditions": [
	I1209 04:26:52.688469 1187425 command_runner.go:130] >       {
	I1209 04:26:52.688476 1187425 command_runner.go:130] >         "message": "",
	I1209 04:26:52.688484 1187425 command_runner.go:130] >         "reason": "",
	I1209 04:26:52.688488 1187425 command_runner.go:130] >         "status": true,
	I1209 04:26:52.688493 1187425 command_runner.go:130] >         "type": "RuntimeReady"
	I1209 04:26:52.688497 1187425 command_runner.go:130] >       },
	I1209 04:26:52.688502 1187425 command_runner.go:130] >       {
	I1209 04:26:52.688509 1187425 command_runner.go:130] >         "message": "Network plugin returns error: cni plugin not initialized",
	I1209 04:26:52.688518 1187425 command_runner.go:130] >         "reason": "NetworkPluginNotReady",
	I1209 04:26:52.688522 1187425 command_runner.go:130] >         "status": false,
	I1209 04:26:52.688530 1187425 command_runner.go:130] >         "type": "NetworkReady"
	I1209 04:26:52.688534 1187425 command_runner.go:130] >       },
	I1209 04:26:52.688541 1187425 command_runner.go:130] >       {
	I1209 04:26:52.688568 1187425 command_runner.go:130] >         "message": "{\"io.containerd.deprecation/cgroup-v1\":\"The support for cgroup v1 is deprecated since containerd v2.2 and will be removed by no later than May 2029. Upgrade the host to use cgroup v2.\"}",
	I1209 04:26:52.688578 1187425 command_runner.go:130] >         "reason": "ContainerdHasDeprecationWarnings",
	I1209 04:26:52.688584 1187425 command_runner.go:130] >         "status": false,
	I1209 04:26:52.688590 1187425 command_runner.go:130] >         "type": "ContainerdHasNoDeprecationWarnings"
	I1209 04:26:52.688595 1187425 command_runner.go:130] >       }
	I1209 04:26:52.688598 1187425 command_runner.go:130] >     ]
	I1209 04:26:52.688606 1187425 command_runner.go:130] >   }
	I1209 04:26:52.688609 1187425 command_runner.go:130] > }
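(In the status block above, NetworkReady is false with reason NetworkPluginNotReady because nothing has populated /etc/cni/net.d yet; the CNI manager created in the next lines, which recommends kindnet with pod CIDR 10.244.0.0/16, is what later clears that condition. Assuming jq is available on the node, the readiness conditions can be pulled out of the same dump with:)

	# extract just the runtime/network readiness conditions from crictl info
	sudo crictl info | jq '.status.conditions[] | {type, status, reason}'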
	I1209 04:26:52.690920 1187425 cni.go:84] Creating CNI manager for ""
	I1209 04:26:52.690942 1187425 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1209 04:26:52.690965 1187425 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1209 04:26:52.690987 1187425 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-667319 NodeName:functional-667319 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1209 04:26:52.691101 1187425 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-667319"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
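	minikube renders this multi-document YAML from the kubeadm options struct logged just above. A toy Go sketch of that step, using text/template on a trimmed-down fragment (illustrative only, not minikube's actual template):

    // render_kubeadm.go - toy sketch of templating a kubeadm config fragment
    // from a struct, in the spirit of minikube's kubeadm.go (this is NOT
    // minikube's real template, just an illustration of the mechanism).
    package main

    import (
    	"os"
    	"text/template"
    )

    type opts struct {
    	AdvertiseAddress  string
    	APIServerPort     int
    	PodSubnet         string
    	ServiceCIDR       string
    	KubernetesVersion string
    }

    const tmpl = `apiVersion: kubeadm.k8s.io/v1beta4
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.APIServerPort}}
    ---
    apiVersion: kubeadm.k8s.io/v1beta4
    kind: ClusterConfiguration
    kubernetesVersion: {{.KubernetesVersion}}
    networking:
      podSubnet: "{{.PodSubnet}}"
      serviceSubnet: {{.ServiceCIDR}}
    `

    func main() {
    	t := template.Must(template.New("kubeadm").Parse(tmpl))
    	_ = t.Execute(os.Stdout, opts{
    		AdvertiseAddress:  "192.168.49.2",
    		APIServerPort:     8441,
    		PodSubnet:         "10.244.0.0/16",
    		ServiceCIDR:       "10.96.0.0/12",
    		KubernetesVersion: "v1.35.0-beta.0",
    	})
    }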
	I1209 04:26:52.691179 1187425 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1209 04:26:52.697985 1187425 command_runner.go:130] > kubeadm
	I1209 04:26:52.698006 1187425 command_runner.go:130] > kubectl
	I1209 04:26:52.698010 1187425 command_runner.go:130] > kubelet
	I1209 04:26:52.698825 1187425 binaries.go:51] Found k8s binaries, skipping transfer
	I1209 04:26:52.698896 1187425 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1209 04:26:52.706638 1187425 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1209 04:26:52.718822 1187425 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1209 04:26:52.731825 1187425 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
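	The "scp memory" entries copy in-memory byte buffers straight to the node over the existing SSH connection rather than reading local files. A rough Go sketch of that idea with golang.org/x/crypto/ssh, piping the bytes through sudo tee; the docker user and port 33900 are taken from this log, while the key path and helper name are assumptions for illustration:

    // scp_memory.go - sketch of copying an in-memory buffer to a remote path
    // over SSH, approximating the "scp memory" step above (assumed helper,
    // not minikube's actual implementation).
    package main

    import (
    	"bytes"
    	"fmt"
    	"log"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func copyMemory(client *ssh.Client, data []byte, dst string) error {
    	sess, err := client.NewSession()
    	if err != nil {
    		return err
    	}
    	defer sess.Close()
    	sess.Stdin = bytes.NewReader(data)
    	// tee writes stdin to the destination; sudo handles root-owned paths.
    	return sess.Run(fmt.Sprintf("sudo tee %s >/dev/null", dst))
    }

    func main() {
    	// Assumed key location; the log uses a Jenkins-specific .minikube dir.
    	key, err := os.ReadFile(os.ExpandEnv("$HOME/.minikube/machines/functional-667319/id_rsa"))
    	if err != nil {
    		log.Fatal(err)
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		log.Fatal(err)
    	}
    	client, err := ssh.Dial("tcp", "127.0.0.1:33900", &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local kic node
    	})
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer client.Close()
    	if err := copyMemory(client, []byte("KUBELET_EXTRA_ARGS=\n"),
    		"/etc/systemd/system/kubelet.service.d/10-kubeadm.conf"); err != nil {
    		log.Fatal(err)
    	}
    }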
	I1209 04:26:52.744962 1187425 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1209 04:26:52.748733 1187425 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1209 04:26:52.748987 1187425 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 04:26:52.855986 1187425 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 04:26:53.181367 1187425 certs.go:69] Setting up /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319 for IP: 192.168.49.2
	I1209 04:26:53.181392 1187425 certs.go:195] generating shared ca certs ...
	I1209 04:26:53.181408 1187425 certs.go:227] acquiring lock for ca certs: {Name:mk15788702f8c4e23b5aeab3f44961d296fab259 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 04:26:53.181570 1187425 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.key
	I1209 04:26:53.181618 1187425 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/proxy-client-ca.key
	I1209 04:26:53.181630 1187425 certs.go:257] generating profile certs ...
	I1209 04:26:53.181740 1187425 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/client.key
	I1209 04:26:53.181805 1187425 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/apiserver.key.c80eb595
	I1209 04:26:53.181848 1187425 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/proxy-client.key
	I1209 04:26:53.181859 1187425 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1209 04:26:53.181873 1187425 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1209 04:26:53.181889 1187425 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1142328/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1209 04:26:53.181899 1187425 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1142328/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1209 04:26:53.181914 1187425 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1209 04:26:53.181925 1187425 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1209 04:26:53.181943 1187425 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1209 04:26:53.181954 1187425 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1209 04:26:53.182004 1187425 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/1144231.pem (1338 bytes)
	W1209 04:26:53.182038 1187425 certs.go:480] ignoring /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/1144231_empty.pem, impossibly tiny 0 bytes
	I1209 04:26:53.182050 1187425 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca-key.pem (1679 bytes)
	I1209 04:26:53.182079 1187425 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem (1078 bytes)
	I1209 04:26:53.182105 1187425 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/cert.pem (1123 bytes)
	I1209 04:26:53.182136 1187425 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/key.pem (1675 bytes)
	I1209 04:26:53.182187 1187425 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/ssl/certs/11442312.pem (1708 bytes)
	I1209 04:26:53.182243 1187425 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/ssl/certs/11442312.pem -> /usr/share/ca-certificates/11442312.pem
	I1209 04:26:53.182260 1187425 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1209 04:26:53.182277 1187425 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/1144231.pem -> /usr/share/ca-certificates/1144231.pem
	I1209 04:26:53.182817 1187425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 04:26:53.202751 1187425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1209 04:26:53.220083 1187425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 04:26:53.237728 1187425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1209 04:26:53.255002 1187425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1209 04:26:53.271923 1187425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1209 04:26:53.289401 1187425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 04:26:53.306616 1187425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1209 04:26:53.323564 1187425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/ssl/certs/11442312.pem --> /usr/share/ca-certificates/11442312.pem (1708 bytes)
	I1209 04:26:53.340526 1187425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 04:26:53.357221 1187425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/1144231.pem --> /usr/share/ca-certificates/1144231.pem (1338 bytes)
	I1209 04:26:53.373705 1187425 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 04:26:53.386274 1187425 ssh_runner.go:195] Run: openssl version
	I1209 04:26:53.391826 1187425 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1209 04:26:53.392252 1187425 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/11442312.pem
	I1209 04:26:53.399306 1187425 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/11442312.pem /etc/ssl/certs/11442312.pem
	I1209 04:26:53.406404 1187425 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11442312.pem
	I1209 04:26:53.409862 1187425 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec  9 04:18 /usr/share/ca-certificates/11442312.pem
	I1209 04:26:53.409914 1187425 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 04:18 /usr/share/ca-certificates/11442312.pem
	I1209 04:26:53.409972 1187425 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11442312.pem
	I1209 04:26:53.450109 1187425 command_runner.go:130] > 3ec20f2e
	I1209 04:26:53.450580 1187425 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1209 04:26:53.457724 1187425 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1209 04:26:53.464857 1187425 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1209 04:26:53.472136 1187425 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 04:26:53.475789 1187425 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec  9 04:09 /usr/share/ca-certificates/minikubeCA.pem
	I1209 04:26:53.475830 1187425 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 04:09 /usr/share/ca-certificates/minikubeCA.pem
	I1209 04:26:53.475880 1187425 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 04:26:53.517012 1187425 command_runner.go:130] > b5213941
	I1209 04:26:53.517090 1187425 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1209 04:26:53.524195 1187425 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1144231.pem
	I1209 04:26:53.531059 1187425 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1144231.pem /etc/ssl/certs/1144231.pem
	I1209 04:26:53.537929 1187425 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1144231.pem
	I1209 04:26:53.541362 1187425 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec  9 04:18 /usr/share/ca-certificates/1144231.pem
	I1209 04:26:53.541587 1187425 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 04:18 /usr/share/ca-certificates/1144231.pem
	I1209 04:26:53.541670 1187425 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1144231.pem
	I1209 04:26:53.586134 1187425 command_runner.go:130] > 51391683
	I1209 04:26:53.586694 1187425 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
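	The hex values printed by openssl x509 -hash -noout (3ec20f2e, b5213941, 51391683) are OpenSSL subject-name hashes, which is why each CA file is linked as /etc/ssl/certs/<hash>.0 and then verified with test -L. A small Go sketch of the same hash-and-link step (needs root to write /etc/ssl/certs):

    // ca_symlink.go - sketch of the hash-and-symlink step above: compute the
    // OpenSSL subject hash of a CA file and link /etc/ssl/certs/<hash>.0 to it.
    package main

    import (
    	"fmt"
    	"log"
    	"os"
    	"os/exec"
    	"strings"
    )

    func main() {
    	cert := "/usr/share/ca-certificates/minikubeCA.pem"
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
    	if err != nil {
    		log.Fatalf("openssl: %v", err)
    	}
    	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
    	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
    	_ = os.Remove(link) // ln -fs semantics: replace any existing link
    	if err := os.Symlink(cert, link); err != nil {
    		log.Fatalf("symlink: %v", err)
    	}
    	fmt.Println("linked", link, "->", cert)
    }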
	I1209 04:26:53.593775 1187425 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 04:26:53.597060 1187425 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 04:26:53.597083 1187425 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1209 04:26:53.597090 1187425 command_runner.go:130] > Device: 259,1	Inode: 1317519     Links: 1
	I1209 04:26:53.597096 1187425 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1209 04:26:53.597101 1187425 command_runner.go:130] > Access: 2025-12-09 04:22:46.557738038 +0000
	I1209 04:26:53.597107 1187425 command_runner.go:130] > Modify: 2025-12-09 04:18:42.397294101 +0000
	I1209 04:26:53.597112 1187425 command_runner.go:130] > Change: 2025-12-09 04:18:42.397294101 +0000
	I1209 04:26:53.597120 1187425 command_runner.go:130] >  Birth: 2025-12-09 04:18:42.397294101 +0000
	I1209 04:26:53.597202 1187425 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1209 04:26:53.637326 1187425 command_runner.go:130] > Certificate will not expire
	I1209 04:26:53.637892 1187425 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1209 04:26:53.678262 1187425 command_runner.go:130] > Certificate will not expire
	I1209 04:26:53.678829 1187425 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1209 04:26:53.719319 1187425 command_runner.go:130] > Certificate will not expire
	I1209 04:26:53.719397 1187425 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1209 04:26:53.760102 1187425 command_runner.go:130] > Certificate will not expire
	I1209 04:26:53.760184 1187425 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1209 04:26:53.805340 1187425 command_runner.go:130] > Certificate will not expire
	I1209 04:26:53.805854 1187425 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1209 04:26:53.846216 1187425 command_runner.go:130] > Certificate will not expire
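	Each openssl x509 -checkend 86400 call asks whether the certificate expires within the next 24 hours; a non-zero exit would trigger regeneration. The pure-Go equivalent just compares NotAfter against time.Now() plus 24h, as in this sketch:

    // checkend.go - pure-Go equivalent of `openssl x509 -checkend 86400`:
    // report whether a PEM certificate expires within the next 24 hours.
    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"log"
    	"os"
    	"time"
    )

    func main() {
    	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
    	if err != nil {
    		log.Fatal(err)
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		log.Fatal("no PEM block found")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		log.Fatal(err)
    	}
    	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
    		fmt.Println("Certificate will expire")
    		os.Exit(1) // matches openssl's non-zero exit when -checkend fails
    	}
    	fmt.Println("Certificate will not expire")
    }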
	I1209 04:26:53.846284 1187425 kubeadm.go:401] StartCluster: {Name:functional-667319 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-667319 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 04:26:53.846701 1187425 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1209 04:26:53.846774 1187425 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 04:26:53.877891 1187425 cri.go:89] found id: ""
	I1209 04:26:53.877982 1187425 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1209 04:26:53.884657 1187425 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1209 04:26:53.884683 1187425 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1209 04:26:53.884690 1187425 command_runner.go:130] > /var/lib/minikube/etcd:
	I1209 04:26:53.885556 1187425 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1209 04:26:53.885572 1187425 kubeadm.go:598] restartPrimaryControlPlane start ...
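	The restart decision hinges on the sudo ls probe above: because kubeadm-flags.env, config.yaml, and the etcd data directory already exist, minikube restarts the existing control plane instead of re-running kubeadm init. A minimal sketch of that existence check (illustrative, not minikube's actual code):

    // detect_existing.go - sketch of the "found existing configuration files"
    // check above: if kubeadm artifacts are already on disk, attempt a cluster
    // restart instead of a fresh `kubeadm init`.
    package main

    import (
    	"fmt"
    	"os"
    )

    func main() {
    	paths := []string{
    		"/var/lib/kubelet/kubeadm-flags.env",
    		"/var/lib/kubelet/config.yaml",
    		"/var/lib/minikube/etcd",
    	}
    	existing := 0
    	for _, p := range paths {
    		if _, err := os.Stat(p); err == nil {
    			existing++
    		}
    	}
    	if existing > 0 {
    		fmt.Println("found existing configuration files, will attempt cluster restart")
    	} else {
    		fmt.Println("no existing configuration, full kubeadm init required")
    	}
    }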
	I1209 04:26:53.885646 1187425 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1209 04:26:53.892789 1187425 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1209 04:26:53.893171 1187425 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-667319" does not appear in /home/jenkins/minikube-integration/22081-1142328/kubeconfig
	I1209 04:26:53.893275 1187425 kubeconfig.go:62] /home/jenkins/minikube-integration/22081-1142328/kubeconfig needs updating (will repair): [kubeconfig missing "functional-667319" cluster setting kubeconfig missing "functional-667319" context setting]
	I1209 04:26:53.893568 1187425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1142328/kubeconfig: {Name:mk0dc127429f88dc4fdfb2d110deebc58207c1b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 04:26:53.893971 1187425 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22081-1142328/kubeconfig
	I1209 04:26:53.894121 1187425 kapi.go:59] client config for functional-667319: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/client.crt", KeyFile:"/home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/client.key", CAFile:"/home/jenkins/minikube-integration/22081-1142328/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb3ec0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1209 04:26:53.894601 1187425 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1209 04:26:53.894621 1187425 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1209 04:26:53.894627 1187425 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1209 04:26:53.894636 1187425 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1209 04:26:53.894643 1187425 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1209 04:26:53.894942 1187425 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1209 04:26:53.895030 1187425 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1209 04:26:53.902229 1187425 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1209 04:26:53.902301 1187425 kubeadm.go:602] duration metric: took 16.713333ms to restartPrimaryControlPlane
	I1209 04:26:53.902316 1187425 kubeadm.go:403] duration metric: took 56.036306ms to StartCluster
	I1209 04:26:53.902333 1187425 settings.go:142] acquiring lock: {Name:mk8fa744e3d74bf8a1cbf5ac275c9f1969ad91a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 04:26:53.902398 1187425 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22081-1142328/kubeconfig
	I1209 04:26:53.902993 1187425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1142328/kubeconfig: {Name:mk0dc127429f88dc4fdfb2d110deebc58207c1b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 04:26:53.903190 1187425 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1209 04:26:53.903521 1187425 config.go:182] Loaded profile config "functional-667319": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1209 04:26:53.903568 1187425 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1209 04:26:53.903630 1187425 addons.go:70] Setting storage-provisioner=true in profile "functional-667319"
	I1209 04:26:53.903643 1187425 addons.go:239] Setting addon storage-provisioner=true in "functional-667319"
	I1209 04:26:53.903675 1187425 host.go:66] Checking if "functional-667319" exists ...
	I1209 04:26:53.904120 1187425 addons.go:70] Setting default-storageclass=true in profile "functional-667319"
	I1209 04:26:53.904144 1187425 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-667319"
	I1209 04:26:53.904441 1187425 cli_runner.go:164] Run: docker container inspect functional-667319 --format={{.State.Status}}
	I1209 04:26:53.904640 1187425 cli_runner.go:164] Run: docker container inspect functional-667319 --format={{.State.Status}}
	I1209 04:26:53.910201 1187425 out.go:179] * Verifying Kubernetes components...
	I1209 04:26:53.913884 1187425 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 04:26:53.930099 1187425 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 04:26:53.932721 1187425 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22081-1142328/kubeconfig
	I1209 04:26:53.932880 1187425 kapi.go:59] client config for functional-667319: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/client.crt", KeyFile:"/home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/client.key", CAFile:"/home/jenkins/minikube-integration/22081-1142328/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb3ec0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1209 04:26:53.933092 1187425 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:26:53.933105 1187425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1209 04:26:53.933155 1187425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-667319
	I1209 04:26:53.933672 1187425 addons.go:239] Setting addon default-storageclass=true in "functional-667319"
	I1209 04:26:53.933726 1187425 host.go:66] Checking if "functional-667319" exists ...
	I1209 04:26:53.934157 1187425 cli_runner.go:164] Run: docker container inspect functional-667319 --format={{.State.Status}}
	I1209 04:26:53.980209 1187425 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/functional-667319/id_rsa Username:docker}
	I1209 04:26:53.991515 1187425 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1209 04:26:53.991543 1187425 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1209 04:26:53.991606 1187425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-667319
	I1209 04:26:54.014988 1187425 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/functional-667319/id_rsa Username:docker}
	I1209 04:26:54.109673 1187425 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 04:26:54.172299 1187425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1209 04:26:54.172446 1187425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:26:54.932432 1187425 node_ready.go:35] waiting up to 6m0s for node "functional-667319" to be "Ready" ...
	I1209 04:26:54.932477 1187425 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:26:54.932512 1187425 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:26:54.932537 1187425 retry.go:31] will retry after 239.582285ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:26:54.932571 1187425 type.go:168] "Request Body" body=""
	I1209 04:26:54.932584 1187425 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:26:54.932596 1187425 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:26:54.932603 1187425 retry.go:31] will retry after 326.615849ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
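	Every apply attempt fails with connection refused because the apiserver on port 8441 is not back up yet after the kubelet restart, so retry.go re-runs the command after growing, jittered delays (240ms, 326ms, 410ms, 634ms, 836ms, ...). A generic Go sketch of that retry pattern, not minikube's actual retry code:

    // retry_backoff.go - sketch of the retry-with-growing-jittered-delay
    // pattern that retry.go logs above (illustrative only).
    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    func retry(attempts int, base time.Duration, op func() error) error {
    	var err error
    	for i := 0; i < attempts; i++ {
    		if err = op(); err == nil {
    			return nil
    		}
    		// grow the delay each round and add jitter so retries spread out
    		d := base*time.Duration(1<<i) + time.Duration(rand.Int63n(int64(base)))
    		fmt.Printf("will retry after %v: %v\n", d, err)
    		time.Sleep(d)
    	}
    	return err
    }

    func main() {
    	calls := 0
    	err := retry(5, 200*time.Millisecond, func() error {
    		calls++
    		if calls < 4 { // simulate the apiserver coming up on the 4th try
    			return errors.New("dial tcp [::1]:8441: connect: connection refused")
    		}
    		return nil
    	})
    	fmt.Println("result:", err)
    }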
	I1209 04:26:54.932629 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:26:54.932908 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:26:55.173322 1187425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:26:55.233582 1187425 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:26:55.233631 1187425 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:26:55.233651 1187425 retry.go:31] will retry after 246.357107ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:26:55.259785 1187425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 04:26:55.318382 1187425 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:26:55.318469 1187425 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:26:55.318493 1187425 retry.go:31] will retry after 410.345383ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:26:55.433607 1187425 type.go:168] "Request Body" body=""
	I1209 04:26:55.433683 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:26:55.434019 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:26:55.480272 1187425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:26:55.539370 1187425 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:26:55.543073 1187425 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:26:55.543104 1187425 retry.go:31] will retry after 836.674318ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:26:55.729246 1187425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 04:26:55.790859 1187425 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:26:55.790906 1187425 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:26:55.790952 1187425 retry.go:31] will retry after 634.479833ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:26:55.933159 1187425 type.go:168] "Request Body" body=""
	I1209 04:26:55.933235 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:26:55.933592 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:26:56.380124 1187425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:26:56.425589 1187425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 04:26:56.432912 1187425 type.go:168] "Request Body" body=""
	I1209 04:26:56.433084 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:26:56.433454 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:26:56.462533 1187425 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:26:56.462616 1187425 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:26:56.462643 1187425 retry.go:31] will retry after 603.323732ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:26:56.528272 1187425 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:26:56.528318 1187425 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:26:56.528338 1187425 retry.go:31] will retry after 1.072780189s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:26:56.932753 1187425 type.go:168] "Request Body" body=""
	I1209 04:26:56.932827 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:26:56.933209 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:26:56.933265 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
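	node_ready.go polls GET /api/v1/nodes/functional-667319 roughly every 500ms for up to 6m, tolerating connection-refused errors while the apiserver restarts. A client-go sketch of the same wait, with the node name and timeout taken from this log:

    // wait_ready.go - client-go sketch of the node_ready wait above: poll the
    // node every 500ms until its Ready condition is True or the timeout expires.
    package main

    import (
    	"context"
    	"fmt"
    	"log"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		log.Fatal(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		log.Fatal(err)
    	}
    	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
    	defer cancel()
    	for {
    		node, err := cs.CoreV1().Nodes().Get(ctx, "functional-667319", metav1.GetOptions{})
    		if err == nil {
    			for _, c := range node.Status.Conditions {
    				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
    					fmt.Println("node is Ready")
    					return
    				}
    			}
    		} else {
    			fmt.Println("will retry:", err) // connection refused while apiserver restarts
    		}
    		select {
    		case <-ctx.Done():
    			log.Fatal("timed out waiting for node Ready")
    		case <-time.After(500 * time.Millisecond):
    		}
    	}
    }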
	I1209 04:26:57.066591 1187425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:26:57.132172 1187425 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:26:57.135761 1187425 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:26:57.135793 1187425 retry.go:31] will retry after 1.855495012s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:26:57.433210 1187425 type.go:168] "Request Body" body=""
	I1209 04:26:57.433286 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:26:57.433630 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:26:57.601957 1187425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 04:26:57.657995 1187425 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:26:57.658038 1187425 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:26:57.658057 1187425 retry.go:31] will retry after 1.134842328s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:26:57.933276 1187425 type.go:168] "Request Body" body=""
	I1209 04:26:57.933355 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:26:57.933644 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:26:58.433445 1187425 type.go:168] "Request Body" body=""
	I1209 04:26:58.433533 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:26:58.433853 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:26:58.793130 1187425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 04:26:58.858674 1187425 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:26:58.858714 1187425 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:26:58.858733 1187425 retry.go:31] will retry after 2.746713696s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:26:58.933078 1187425 type.go:168] "Request Body" body=""
	I1209 04:26:58.933157 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:26:58.933497 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:26:58.933557 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:26:58.991692 1187425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:26:59.049214 1187425 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:26:59.052768 1187425 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:26:59.052797 1187425 retry.go:31] will retry after 2.715253433s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:26:59.433202 1187425 type.go:168] "Request Body" body=""
	I1209 04:26:59.433383 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:26:59.433760 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:26:59.932622 1187425 type.go:168] "Request Body" body=""
	I1209 04:26:59.932706 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:26:59.933025 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:00.432716 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:00.432792 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:00.433084 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:00.932666 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:00.932767 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:00.933080 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:01.432721 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:01.432802 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:01.433155 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:27:01.433220 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:27:01.606514 1187425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 04:27:01.664108 1187425 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:27:01.667800 1187425 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:27:01.667831 1187425 retry.go:31] will retry after 3.567848129s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
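	(Each failed addon apply above is rescheduled by retry.go with a growing, jittered delay — 3.57s here, then 1.49s, 2.98s, and up to 17.6s later in this log. A hedged sketch of that retry-with-backoff shape; the delays are illustrative, and a plain exec.Command stands in for the SSH-executed kubectl:

	    package main

	    import (
	        "fmt"
	        "math/rand"
	        "os/exec"
	        "time"
	    )

	    // applyWithRetry re-runs kubectl apply with a growing, jittered delay,
	    // the shape the retry.go lines above report.
	    func applyWithRetry(manifest string, attempts int) error {
	        delay := time.Second
	        var err error
	        for i := 0; i < attempts; i++ {
	            err = exec.Command("kubectl", "apply", "--force", "-f", manifest).Run()
	            if err == nil {
	                return nil
	            }
	            jittered := delay + time.Duration(rand.Int63n(int64(delay)))
	            fmt.Printf("will retry after %s: %v\n", jittered, err)
	            time.Sleep(jittered)
	            delay *= 2 // exponential growth, capped schedules vary
	        }
	        return fmt.Errorf("apply %s failed after %d attempts: %w", manifest, attempts, err)
	    }

	    func main() {
	        if err := applyWithRetry("/etc/kubernetes/addons/storageclass.yaml", 5); err != nil {
	            fmt.Println(err)
	        }
	    }
	)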
	I1209 04:27:01.769041 1187425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:27:01.828356 1187425 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:27:01.831855 1187425 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:27:01.831890 1187425 retry.go:31] will retry after 1.487712174s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:27:01.933283 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:01.933357 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:01.933696 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:02.433227 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:02.433296 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:02.433566 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:02.933365 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:02.933446 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:02.933784 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:03.320437 1187425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:27:03.380650 1187425 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:27:03.380689 1187425 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:27:03.380707 1187425 retry.go:31] will retry after 2.980491619s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:27:03.432967 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:03.433052 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:03.433335 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:27:03.433382 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:27:03.933173 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:03.933261 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:03.933564 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:04.433334 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:04.433407 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:04.433774 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:04.932608 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:04.932706 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:04.932991 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:05.236581 1187425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 04:27:05.294920 1187425 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:27:05.298256 1187425 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:27:05.298287 1187425 retry.go:31] will retry after 3.775902085s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:27:05.433544 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:05.433623 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:05.433911 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:27:05.433968 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:27:05.932633 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:05.932732 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:05.933097 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:06.361776 1187425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:27:06.423571 1187425 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:27:06.423609 1187425 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:27:06.423628 1187425 retry.go:31] will retry after 5.55631863s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:27:06.432687 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:06.432759 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:06.433064 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:06.932763 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:06.932858 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:06.933188 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:07.432720 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:07.432798 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:07.433122 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:07.932712 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:07.932788 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:07.933143 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:27:07.933270 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
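	(The Response lines carry an empty status and milliseconds=0 because the TCP connect fails before any HTTP response exists; the headers=< ... > block is klog/v2's block-style rendering of a multi-line structured value. These round-tripper logs typically only appear at high klog verbosity (roughly: URL at -v=6, headers at -v=7, bodies at -v=8). A small sketch of producing that rendering with k8s.io/klog/v2; values are copied from the log and the exact verbosity thresholds are an assumption:

	    package main

	    import (
	        "flag"

	        "k8s.io/klog/v2"
	    )

	    func main() {
	        klog.InitFlags(nil)
	        _ = flag.Set("v", "8") // headers/bodies only show up at high verbosity
	        flag.Parse()
	        // Multi-line values render block-style as headers=< ... >,
	        // the same shape as the request logs above.
	        klog.V(7).InfoS("Request", "verb", "GET",
	            "url", "https://192.168.49.2:8441/api/v1/nodes/functional-667319",
	            "headers", "Accept: application/vnd.kubernetes.protobuf,application/json\nUser-Agent: minikube-linux-arm64/...")
	        klog.Flush()
	    }
	)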
	I1209 04:27:08.432753 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:08.432826 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:08.433121 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:08.932708 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:08.932789 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:08.933114 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:09.074480 1187425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 04:27:09.131213 1187425 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:27:09.134642 1187425 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:27:09.134677 1187425 retry.go:31] will retry after 3.336397846s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:27:09.433063 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:09.433136 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:09.433477 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:09.933147 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:09.933243 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:09.933515 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:27:09.933565 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:27:10.433463 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:10.433543 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:10.433860 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:10.933720 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:10.933792 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:10.934110 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:11.432758 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:11.432831 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:11.433103 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:11.932702 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:11.932775 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:11.933127 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:11.980489 1187425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:27:12.042917 1187425 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:27:12.047245 1187425 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:27:12.047276 1187425 retry.go:31] will retry after 4.846358398s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:27:12.432667 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:12.432737 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:12.433027 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:27:12.433074 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:27:12.471387 1187425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 04:27:12.533451 1187425 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:27:12.533488 1187425 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:27:12.533508 1187425 retry.go:31] will retry after 12.396608004s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
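	(Two different endpoints refuse connections in the same window: the readiness probes hit https://192.168.49.2:8441 from the host, while kubectl's OpenAPI download fails against localhost:8441 from inside the node. Both failing with "connection refused" points at kube-apiserver itself not listening, rather than a routing or firewall problem. A quick reachability probe with the addresses taken from the log; illustrative only, and 127.0.0.1 stands in for the node-local [::1]:

	    package main

	    import (
	        "fmt"
	        "net"
	        "time"
	    )

	    func main() {
	        for _, addr := range []string{"192.168.49.2:8441", "127.0.0.1:8441"} {
	            conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
	            if err != nil {
	                fmt.Printf("%s: %v\n", addr, err) // e.g. connect: connection refused
	                continue
	            }
	            conn.Close()
	            fmt.Printf("%s: reachable\n", addr)
	        }
	    }
	)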
	I1209 04:27:12.932956 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:12.933031 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:12.933353 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:13.432721 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:13.432794 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:13.433126 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:13.932935 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:13.933007 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:13.933342 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:14.432734 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:14.432802 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:14.433056 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:27:14.433098 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:27:14.932653 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:14.932768 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:14.933061 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:15.432698 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:15.432796 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:15.433182 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:15.932668 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:15.932746 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:15.933050 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:16.432712 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:16.432788 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:16.433123 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:27:16.433176 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:27:16.894794 1187425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:27:16.933270 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:16.933350 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:16.933633 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:16.956237 1187425 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:27:16.956277 1187425 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:27:16.956299 1187425 retry.go:31] will retry after 11.708634593s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:27:17.432723 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:17.432798 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:17.433065 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:17.932740 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:17.932815 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:17.933136 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:18.432860 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:18.432932 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:18.433214 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:27:18.433267 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:27:18.932660 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:18.932728 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:18.933009 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:19.432696 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:19.432772 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:19.433147 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:19.932674 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:19.932750 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:19.933101 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:20.432907 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:20.432984 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:20.433236 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:20.932684 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:20.932760 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:20.933100 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:27:20.933152 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:27:21.432797 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:21.432871 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:21.433197 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:21.932637 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:21.932726 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:21.932993 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:22.432707 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:22.432782 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:22.433117 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:22.932841 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:22.932917 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:22.933234 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:27:22.933291 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:27:23.432668 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:23.432751 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:23.433027 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:23.932873 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:23.932948 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:23.933315 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:24.432680 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:24.432753 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:24.433071 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:24.930697 1187425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 04:27:24.933014 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:24.933088 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:24.933320 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:27:24.933369 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:27:25.005568 1187425 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:27:25.005627 1187425 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:27:25.005648 1187425 retry.go:31] will retry after 8.82909482s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:27:25.433152 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:25.433233 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:25.433532 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:25.932972 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:25.933044 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:25.933358 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:26.432756 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:26.432830 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:26.433108 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:26.932726 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:26.932803 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:26.933099 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:27.432693 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:27.432765 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:27.433082 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:27:27.433136 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:27:27.932636 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:27.932712 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:27.933033 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:28.432693 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:28.432767 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:28.433092 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:28.665515 1187425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:27:28.738878 1187425 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:27:28.745399 1187425 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:27:28.745439 1187425 retry.go:31] will retry after 17.60519501s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
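	(The "Process exited with status 1" / stdout: / stderr: shape repeated throughout comes from the command runner capturing both streams and the exit code separately. A minimal sketch of producing that report shape in Go; an assumed illustration, not minikube's actual command_runner:

	    package main

	    import (
	        "bytes"
	        "errors"
	        "fmt"
	        "os/exec"
	    )

	    // run executes a command, captures both streams and the exit status,
	    // and prints them in the shape seen throughout this log.
	    func run(name string, args ...string) {
	        cmd := exec.Command(name, args...)
	        var stdout, stderr bytes.Buffer
	        cmd.Stdout, cmd.Stderr = &stdout, &stderr
	        err := cmd.Run()
	        var ee *exec.ExitError
	        if errors.As(err, &ee) {
	            fmt.Printf("Process exited with status %d\nstdout:\n%s\nstderr:\n%s\n",
	                ee.ExitCode(), stdout.String(), stderr.String())
	            return
	        }
	        if err != nil {
	            fmt.Println("failed to start:", err)
	        }
	    }

	    func main() {
	        run("kubectl", "apply", "--force", "-f", "/etc/kubernetes/addons/storageclass.yaml")
	    }
	)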
	I1209 04:27:28.932773 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:28.932863 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:28.933172 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:29.432651 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:29.432722 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:29.432984 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:29.932660 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:29.932732 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:29.933044 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:27:29.933094 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:27:30.432735 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:30.432809 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:30.433166 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:30.932654 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:30.932753 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:30.933041 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:31.432690 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:31.432771 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:31.433110 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:31.932741 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:31.932815 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:31.933152 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:27:31.933206 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:27:32.432841 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:32.432914 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:32.433177 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:32.932689 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:32.932763 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:32.933056 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:33.432759 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:33.432858 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:33.433217 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:33.835821 1187425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 04:27:33.901341 1187425 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:27:33.901393 1187425 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:27:33.901417 1187425 retry.go:31] will retry after 15.074885047s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:27:33.933523 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:33.933593 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:33.933865 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:27:33.933909 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:27:34.433650 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:34.433727 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:34.434057 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:34.933020 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:34.933101 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:34.933420 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... the same GET is retried every ~500ms through 04:27:45.933, each attempt returning an empty response; node_ready.go:55 logs "connection refused" (will retry) at 04:27:36.433, 04:27:38.933, 04:27:41.433, and 04:27:43.933 ...]
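What the repeated round_trippers lines show is minikube's client-side readiness gate: node_ready.go issues the same GET on a fixed ~500ms cadence and treats "connection refused" as transient, logging a warning and retrying rather than aborting. A minimal sketch of that pattern using client-go follows; the function name, interval, and timeout handling are illustrative assumptions, not minikube's actual node_ready.go.

	// waitNodeReady polls a node every 500ms until its Ready condition is
	// True, swallowing transient errors (e.g. an apiserver that is still
	// down) so the wait keeps retrying until the timeout expires.
	package main

	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
		return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
			func(ctx context.Context) (bool, error) {
				node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					return false, nil // "connection refused" etc.: retry, don't fail
				}
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
						return true, nil
					}
				}
				return false, nil
			})
	}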
	I1209 04:27:46.350898 1187425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:27:46.406595 1187425 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:27:46.409949 1187425 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:27:46.409981 1187425 retry.go:31] will retry after 30.377142014s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
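Worth noting about the failure above: kubectl's client-side validation has to download the OpenAPI schema from the apiserver, so with localhost:8441 refusing connections the manifest never even reaches the cluster; the suggested --validate=false would only skip the schema check, not make the apply succeed. Below is a hedged sketch of the apply-and-retry loop the addons.go/retry.go lines imply, an illustration under those assumptions rather than minikube's actual code.

	// applyAddon shells out to kubectl apply and retries on failure,
	// mirroring the "apply failed, will retry" pattern in the log.
	// The binary path, attempt count, and backoff are hypothetical.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func applyAddon(kubectl, manifest string, attempts int, backoff time.Duration) error {
		var lastErr error
		for i := 0; i < attempts; i++ {
			out, err := exec.Command(kubectl, "apply", "--force", "-f", manifest).CombinedOutput()
			if err == nil {
				return nil
			}
			lastErr = fmt.Errorf("apply %s: %w\n%s", manifest, err, out)
			time.Sleep(backoff) // the real log shows varying, non-fixed delays
		}
		return lastErr
	}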
	I1209 04:27:46.433127 1187425 type.go:168] "Request Body" body=""
	[... the GET on /api/v1/nodes/functional-667319 keeps retrying every ~500ms through 04:27:48.933; every attempt gets an empty response ...]
	W1209 04:27:48.933100 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:27:48.977251 1187425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 04:27:49.036457 1187425 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:27:49.036497 1187425 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:27:49.036517 1187425 retry.go:31] will retry after 20.293703248s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
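The retry delays logged here are non-round and non-monotonic (30.377142014s for storage-provisioner, then 20.293703248s for storageclass), which is consistent with randomized backoff rather than fixed doubling. A one-function sketch of jittered backoff, assuming that interpretation:

	// jitteredDelay scales a base delay by a random factor in [1, 1+jitter),
	// spreading retries out so repeated attempts don't land in lockstep.
	// The jitter model is an assumption, not minikube's retry.go.
	package main

	import (
		"math/rand"
		"time"
	)

	func jitteredDelay(base time.Duration, jitter float64) time.Duration {
		return time.Duration(float64(base) * (1 + jitter*rand.Float64()))
	}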
	[... polling continues every ~500ms from 04:27:49.432 through 04:28:08.933; node_ready.go:55 logs "connection refused" (will retry) at 04:27:50.933, 04:27:53.433, 04:27:55.433, 04:27:57.933, 04:28:00.433, 04:28:02.933, 04:28:05.433, and 04:28:07.434 ...]
	I1209 04:28:09.330698 1187425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 04:28:09.392626 1187425 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:28:09.392671 1187425 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:28:09.392765 1187425 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	[... polling continues every ~500ms from 04:28:09.432 through 04:28:16.433; node_ready.go:55 logs "connection refused" (will retry) at 04:28:09.933, 04:28:11.933, and 04:28:14.433 ...]
	W1209 04:28:16.433135 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:28:16.787371 1187425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:28:16.844461 1187425 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:28:16.844502 1187425 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:28:16.844590 1187425 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1209 04:28:16.849226 1187425 out.go:179] * Enabled addons: 
	I1209 04:28:16.852870 1187425 addons.go:530] duration metric: took 1m22.949297316s for enable addons: enabled=[]
	[... polling continues every ~500ms from 04:28:16.932 through 04:28:29.433; node_ready.go:55 logs "connection refused" (will retry) at 04:28:18.433, 04:28:20.433, 04:28:22.433, 04:28:24.933, and 04:28:26.933 ...]
	W1209 04:28:29.433592 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:28:29.933575 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:29.933655 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:29.933979 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:30.432667 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:30.432747 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:30.433044 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:30.932681 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:30.932752 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:30.933046 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:31.432688 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:31.432771 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:31.433098 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:31.932806 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:31.932880 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:31.933203 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:28:31.933259 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:28:32.432774 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:32.432849 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:32.433097 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:32.932695 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:32.932765 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:32.933078 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:33.432694 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:33.432776 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:33.433090 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:33.932980 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:33.933051 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:33.933310 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:28:33.933359 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:28:34.432667 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:34.432745 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:34.433055 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:34.932949 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:34.933032 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:34.933356 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:35.433019 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:35.433096 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:35.433526 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:35.933390 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:35.933466 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:35.933812 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:28:35.933870 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:28:36.433595 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:36.433676 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:36.433996 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:36.932657 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:36.932727 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:36.933025 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:37.432697 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:37.432776 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:37.433068 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:37.932702 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:37.932780 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:37.933143 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:38.432646 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:38.432727 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:38.433055 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:28:38.433106 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:28:38.932743 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:38.932816 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:38.933130 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:39.432847 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:39.432919 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:39.433263 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:39.933046 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:39.933114 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:39.933379 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:40.432710 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:40.432783 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:40.433129 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:28:40.433184 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:28:40.932927 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:40.933008 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:40.933371 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:41.432689 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:41.432758 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:41.433014 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:41.932710 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:41.932795 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:41.933094 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:42.432689 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:42.432808 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:42.433149 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:28:42.433204 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:28:42.932862 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:42.932928 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:42.933226 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:43.432918 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:43.432995 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:43.433361 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:43.933127 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:43.933204 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:43.933534 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:44.433220 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:44.433305 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:44.433609 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:28:44.433661 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:28:44.933573 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:44.933652 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:44.933989 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:45.432671 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:45.432808 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:45.433150 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:45.932712 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:45.932784 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:45.933049 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:46.432736 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:46.432815 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:46.433149 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:46.932701 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:46.932779 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:46.933073 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:28:46.933121 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:28:47.432739 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:47.432826 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:47.433130 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:47.932695 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:47.932765 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:47.933076 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:48.432672 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:48.432745 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:48.433062 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:48.932639 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:48.932746 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:48.933042 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:49.432707 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:49.432782 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:49.433123 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:28:49.433177 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:28:49.932922 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:49.932995 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:49.933579 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:50.433185 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:50.433253 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:50.433517 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:50.933391 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:50.933468 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:50.933797 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:51.433551 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:51.433624 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:51.433934 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:28:51.433990 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:28:51.933180 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:51.933283 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:51.933542 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:52.433358 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:52.433437 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:52.433756 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:52.933478 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:52.933559 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:52.933900 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:53.433153 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:53.433229 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:53.433491 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:53.932707 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:53.932895 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:53.933271 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:28:53.933325 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:28:54.432706 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:54.432787 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:54.433082 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:54.933635 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:54.933745 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:54.934087 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:55.432697 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:55.432773 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:55.433110 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:55.932879 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:55.932954 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:55.933290 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:28:55.933359 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:28:56.432868 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:56.432941 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:56.433305 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:56.932702 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:56.932781 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:56.933131 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:57.432846 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:57.432925 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:57.433270 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:57.932659 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:57.932734 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:57.933027 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:58.432714 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:58.432792 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:58.433128 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:28:58.433197 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:28:58.932868 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:58.932944 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:58.933265 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:59.432668 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:59.432735 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:59.432989 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:59.932616 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:59.932707 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:59.933033 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:00.432720 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:00.432802 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:00.433159 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:29:00.433228 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:29:00.932660 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:00.932731 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:00.933053 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:01.432716 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:01.432794 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:01.433137 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:01.932702 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:01.932776 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:01.933098 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:02.432775 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:02.432843 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:02.433108 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:02.932791 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:02.932873 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:02.933214 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:29:02.933284 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:29:03.432715 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:03.432795 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:03.433113 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:03.933003 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:03.933076 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:03.933364 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:04.432671 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:04.432749 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:04.433066 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:04.932620 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:04.932694 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:04.933013 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:05.432723 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:05.432790 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:05.433120 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:29:05.433184 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:29:05.932842 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:05.932925 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:05.933228 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:06.432721 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:06.432798 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:06.433119 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:06.932673 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:06.932758 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:06.933065 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:07.432690 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:07.432761 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:07.433037 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:07.932687 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:07.932769 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:07.933108 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:29:07.933164 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:29:08.432792 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:08.432858 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:08.433117 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:08.932787 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:08.932863 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:08.933157 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:09.432693 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:09.432764 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:09.433055 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:09.932595 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:09.932672 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:09.932942 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:10.432649 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:10.432719 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:10.433035 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:29:10.433090 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:29:10.932796 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:10.932869 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:10.933200 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:11.432701 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:11.432802 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:11.433137 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:11.932772 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:11.932869 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:11.933219 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:12.432724 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:12.432804 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:12.433120 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:29:12.433175 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:29:12.932648 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:12.932732 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:12.933021 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:13.432608 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:13.432696 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:13.432999 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:13.932923 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:13.932996 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:13.933301 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:14.433005 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:14.433076 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:14.433349 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:29:14.433392 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:29:14.933312 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:14.933390 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:14.933705 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:15.433476 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:15.433554 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:15.433865 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:15.933207 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:15.933288 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:15.933572 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:16.433400 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:16.433476 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:16.433794 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:29:16.433849 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:29:16.933242 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:16.933322 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:16.933648 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:17.433213 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:17.433292 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:17.433548 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:17.933339 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:17.933416 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:17.933707 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:18.433434 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:18.433516 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:18.433853 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:29:18.433907 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:29:18.933184 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:18.933260 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:18.933504 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... the GET https://192.168.49.2:8441/api/v1/nodes/functional-667319 poll above repeats at a ~500 ms interval from 04:29:19 through 04:30:20 (roughly 120 attempts); every attempt returns an empty response (status="" headers="" milliseconds=0), and node_ready.go:55 re-emits the same warning about every two seconds: error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused ...]
	I1209 04:30:20.933260 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:20.933344 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:20.933670 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:30:20.933726 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:30:21.433173 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:21.433246 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:21.433511 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:21.933299 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:21.933379 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:21.933716 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:22.433492 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:22.433570 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:22.433867 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:22.933284 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:22.933366 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:22.933654 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:23.433368 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:23.433438 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:23.433760 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:30:23.433812 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:30:23.933593 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:23.933675 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:23.933994 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:24.432661 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:24.432729 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:24.432981 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:24.932865 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:24.932938 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:24.933361 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:25.432685 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:25.432763 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:25.433120 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:25.932767 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:25.932839 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:25.933140 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:30:25.933197 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:30:26.432718 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:26.432796 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:26.433197 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:26.932889 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:26.932990 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:26.933317 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:27.432657 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:27.432727 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:27.433032 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:27.932724 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:27.932798 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:27.933135 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:28.432696 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:28.432772 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:28.433073 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:30:28.433121 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:30:28.932660 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:28.932735 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:28.933045 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:29.432714 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:29.432786 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:29.433158 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:29.932918 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:29.933000 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:29.933354 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:30.432667 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:30.432740 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:30.433039 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:30.932757 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:30.932838 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:30.933183 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:30:30.933239 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:30:31.432899 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:31.432979 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:31.433354 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:31.933050 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:31.933119 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:31.933461 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:32.433235 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:32.433315 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:32.433644 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:32.933442 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:32.933524 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:32.933825 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:30:32.933872 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:30:33.433233 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:33.433304 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:33.433591 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:33.933547 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:33.933627 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:33.933938 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:34.432679 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:34.432763 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:34.433108 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:34.933586 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:34.933660 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:34.933905 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:30:34.933945 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:30:35.432656 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:35.432733 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:35.433079 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:35.932801 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:35.932887 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:35.933268 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:36.432736 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:36.432805 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:36.433059 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:36.932731 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:36.932806 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:36.933156 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:37.432867 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:37.432942 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:37.433311 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:30:37.433368 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:30:37.932650 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:37.932720 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:37.932998 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:38.432684 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:38.432763 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:38.433055 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:38.932741 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:38.932818 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:38.933136 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:39.432679 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:39.432748 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:39.433040 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:39.932790 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:39.932869 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:39.933219 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:30:39.933279 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:30:40.432703 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:40.432777 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:40.433111 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:40.932641 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:40.932707 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:40.932957 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:41.432666 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:41.432744 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:41.433069 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:41.932847 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:41.932929 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:41.933224 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:42.432889 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:42.432958 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:42.433265 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:30:42.433309 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
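	Every warning in this stretch is the same failure mode: the handshake to 192.168.49.2:8441 is actively refused, which means the node is reachable but kube-apiserver is not (or not yet) bound to the port; a plain dial timeout would instead point at the host or route being down. A short, hypothetical Go check that separates those two cases for the endpoint seen in this log (address and timeout are assumptions taken from the log, not part of the test suite):

// Sketch: distinguish "connection refused" (host up, port closed) from a
// timeout (host or route down) for the apiserver address in the log.
package main

import (
	"errors"
	"fmt"
	"net"
	"syscall"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "192.168.49.2:8441", 2*time.Second)
	switch {
	case err == nil:
		conn.Close()
		fmt.Println("port open: something is listening on 8441")
	case errors.Is(err, syscall.ECONNREFUSED):
		fmt.Println("connection refused: host up, apiserver not listening")
	default:
		fmt.Printf("other failure (timeout / no route): %v\n", err)
	}
}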
	I1209 04:30:42.932708 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:42.932789 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:42.933126 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:43.432820 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:43.432902 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:43.433230 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:43.933144 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:43.933213 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:43.933465 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:44.433223 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:44.433300 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:44.433652 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:30:44.433704 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:30:44.933589 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:44.933670 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:44.934005 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:45.432690 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:45.432762 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:45.433007 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:45.932747 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:45.932822 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:45.933163 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:46.432880 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:46.432953 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:46.433265 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:46.932662 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:46.932736 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:46.933048 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:30:46.933099 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:30:47.432707 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:47.432797 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:47.433190 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:47.932887 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:47.932971 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:47.933316 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:48.432720 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:48.432792 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:48.433100 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:48.932688 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:48.932768 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:48.933088 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:30:48.933148 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:30:49.432733 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:49.432809 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:49.433125 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:49.933000 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:49.933071 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:49.933338 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:50.433013 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:50.433086 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:50.433573 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:50.933345 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:50.933421 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:50.933709 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:30:50.933750 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:30:51.433232 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:51.433307 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:51.433630 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:51.933396 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:51.933477 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:51.933822 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:52.433445 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:52.433526 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:52.433848 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:52.933226 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:52.933298 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:52.933562 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:53.433320 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:53.433394 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:53.433724 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:30:53.433778 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:30:53.932930 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:53.933016 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:53.933473 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:54.433004 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:54.433155 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:54.433480 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:54.933346 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:54.933427 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:54.933751 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:55.433491 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:55.433571 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:55.433940 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:30:55.434008 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:30:55.933242 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:55.933327 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:55.933662 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:56.433447 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:56.433527 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:56.433865 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:56.933651 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:56.933744 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:56.934082 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:57.432792 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:57.432864 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:57.433162 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:57.932683 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:57.932753 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:57.933114 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:30:57.933173 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:30:58.432860 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:58.432937 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:58.433264 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:58.932676 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:58.932748 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:58.932997 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:59.432729 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:59.432808 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:59.433150 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:59.933081 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:59.933159 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:59.933480 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:30:59.933530 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:31:00.433233 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:00.433315 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:00.433580 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:00.933316 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:00.933394 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:00.933727 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:01.433533 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:01.433611 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:01.433948 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:01.933228 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:01.933301 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:01.933558 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:31:01.933611 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:31:02.433377 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:02.433451 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:02.433800 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:02.933601 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:02.933680 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:02.933967 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:03.432648 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:03.432726 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:03.432986 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:03.932952 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:03.933038 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:03.933395 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:04.433141 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:04.433218 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:04.433526 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:31:04.433581 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:31:04.933489 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:04.933558 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:04.933807 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:05.433605 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:05.433678 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:05.434011 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:05.932712 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:05.932791 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:05.933127 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:06.432820 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:06.432900 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:06.433220 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:06.932716 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:06.932796 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:06.933135 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:31:06.933230 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:31:07.432675 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:07.432749 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:07.433059 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:07.932657 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:07.932734 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:07.933058 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:08.432730 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:08.432806 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:08.433103 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:08.932714 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:08.932789 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:08.933128 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:09.432655 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:09.432733 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:09.432994 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:31:09.433050 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:31:09.932911 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:09.932991 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:09.933336 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... ~120 further GET https://192.168.49.2:8441/api/v1/nodes/functional-667319 poll cycles, identical except for timestamps, repeated every ~500ms from 04:31:10.432 through 04:32:11.433, omitted: each request fails with "dial tcp 192.168.49.2:8441: connect: connection refused", and the node_ready.go:55 "will retry" warning recurs roughly every 2s (last at W1209 04:32:10.433) ...]
	I1209 04:32:11.932874 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:11.932955 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:11.933284 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:12.432656 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:12.432727 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:12.432974 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:12.932659 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:12.932737 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:12.933062 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:32:12.933115 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:32:13.432673 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:13.432745 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:13.433062 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:13.932942 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:13.933022 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:13.933305 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:14.432666 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:14.432737 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:14.433054 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:14.932628 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:14.932702 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:14.933027 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:15.432739 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:15.432819 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:15.433087 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:32:15.433132 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:32:15.932808 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:15.932886 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:15.933232 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:16.432939 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:16.433082 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:16.433415 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:16.933224 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:16.933297 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:16.933611 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:17.433392 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:17.433466 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:17.433806 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:32:17.433861 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:32:17.933475 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:17.933557 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:17.933868 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:18.433222 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:18.433292 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:18.433591 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:18.933254 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:18.933331 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:18.933670 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:19.433471 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:19.433558 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:19.433901 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:32:19.433958 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:32:19.932590 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:19.932659 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:19.932906 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:20.432666 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:20.432742 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:20.433050 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:20.932683 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:20.932763 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:20.933127 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:21.432653 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:21.432736 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:21.433076 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:21.932743 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:21.932826 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:21.933126 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:32:21.933180 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:32:22.432753 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:22.432822 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:22.433257 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:22.932652 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:22.932719 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:22.933033 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:23.432711 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:23.432784 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:23.433133 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:23.933069 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:23.933144 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:23.933485 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:32:23.933542 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:32:24.433150 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:24.433216 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:24.433549 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:24.933587 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:24.933667 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:24.933983 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:25.432703 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:25.432780 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:25.433147 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:25.932823 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:25.932900 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:25.933226 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:26.432706 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:26.432784 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:26.433129 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:32:26.433184 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:32:26.932670 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:26.932744 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:26.933089 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:27.432659 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:27.432738 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:27.433018 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:27.932725 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:27.932802 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:27.933150 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:28.432726 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:28.432802 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:28.433119 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:28.932673 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:28.932744 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:28.933005 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:32:28.933048 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:32:29.432681 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:29.432758 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:29.433106 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:29.932862 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:29.932935 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:29.933263 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:30.432649 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:30.432724 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:30.433048 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:30.932738 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:30.932809 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:30.933151 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:32:30.933209 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:32:31.432726 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:31.432804 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:31.433153 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:31.932709 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:31.932785 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:31.933093 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:32.432864 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:32.432944 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:32.433316 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:32.932716 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:32.932791 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:32.933128 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:33.432801 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:33.432867 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:33.433124 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:32:33.433164 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:32:33.932966 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:33.933040 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:33.933351 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:34.432696 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:34.432771 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:34.433125 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:34.932830 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:34.932901 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:34.933235 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:35.432934 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:35.433024 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:35.433448 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:32:35.433504 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:32:35.933268 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:35.933342 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:35.933709 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:36.433228 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:36.433294 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:36.433588 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:36.933406 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:36.933485 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:36.933802 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:37.433562 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:37.433642 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:37.433939 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:32:37.433983 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:32:37.933183 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:37.933254 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:37.933510 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:38.433295 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:38.433365 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:38.433691 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:38.933541 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:38.933625 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:38.933982 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:39.432665 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:39.432740 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:39.432999 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:39.932625 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:39.932702 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:39.933037 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:32:39.933088 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:32:40.432600 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:40.432680 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:40.432996 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:40.932646 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:40.932715 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:40.933018 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:41.432713 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:41.432789 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:41.433153 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:41.932729 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:41.932806 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:41.933137 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:32:41.933194 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:32:42.432656 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:42.432729 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:42.433054 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:42.932710 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:42.932792 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:42.933135 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:43.432827 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:43.432907 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:43.433251 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:43.932972 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:43.933046 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:43.933297 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:32:43.933337 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:32:44.433057 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:44.433132 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:44.433467 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:44.933349 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:44.933425 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:44.933760 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:45.433200 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:45.433271 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:45.433522 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:45.933329 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:45.933403 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:45.933719 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:32:45.933777 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:32:46.433543 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:46.433636 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:46.433947 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:46.933240 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:46.933306 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:46.933602 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:47.433389 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:47.433467 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:47.433758 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:47.933580 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:47.933664 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:47.934006 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:32:47.934069 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:32:48.432643 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:48.432717 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:48.432979 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:48.932681 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:48.932755 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:48.933070 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:49.432675 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:49.432745 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:49.433131 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:49.933137 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:49.933206 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:49.933501 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:50.433255 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:50.433323 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:50.433610 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:32:50.433656 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:32:50.933294 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:50.933368 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:50.933657 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:51.433200 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:51.433282 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:51.433542 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:51.933307 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:51.933394 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:51.933715 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:52.433471 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:52.433553 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:52.433873 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:32:52.433936 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:32:52.933235 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:52.933316 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:52.933571 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:53.433369 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:53.433448 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:53.433777 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:53.933615 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:53.933693 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:53.934065 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:54.432659 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:54.432727 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:54.433303 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:54.933397 1187425 type.go:168] "Request Body" body=""
	W1209 04:32:54.933475 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): client rate limiter Wait returned an error: context deadline exceeded
	I1209 04:32:54.933495 1187425 node_ready.go:38] duration metric: took 6m0.001016343s for node "functional-667319" to be "Ready" ...
	I1209 04:32:54.936503 1187425 out.go:203] 
	W1209 04:32:54.939246 1187425 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1209 04:32:54.939264 1187425 out.go:285] * 
	W1209 04:32:54.941401 1187425 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 04:32:54.944197 1187425 out.go:203] 
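
	The output above is minikube's node-readiness wait: it re-issues the same GET against /api/v1/nodes/functional-667319 roughly every 500 ms, treats each "connection refused" as retryable, and only gives up when the 6m0s deadline expires, surfacing as the rate limiter's "context deadline exceeded" at 04:32:54. Below is a minimal sketch of that polling pattern using client-go; it is illustrative only, not minikube's actual node_ready.go, and it assumes a caller supplies an already-configured kubernetes.Interface client.

	// Sketch of a node-readiness poll like the one in the log above.
	// Assumes a configured client; not minikube's real implementation.
	package nodewait

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// WaitNodeReady polls the node object every 500ms until its Ready
	// condition is True or the 6-minute deadline expires.
	func WaitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
		return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					// Transient errors such as "connection refused" are not
					// fatal: returning (false, nil) keeps the poll retrying
					// until the context deadline fires.
					fmt.Printf("will retry: %v\n", err)
					return false, nil
				}
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
	}

	The essential design choice is that the condition function swallows transient errors instead of aborting, so the loop keeps retrying until the context deadline fires, which is exactly the failure mode recorded above.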
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 09 04:26:52 functional-667319 containerd[5187]: time="2025-12-09T04:26:52.429451048Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 09 04:26:52 functional-667319 containerd[5187]: time="2025-12-09T04:26:52.429532046Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 09 04:26:52 functional-667319 containerd[5187]: time="2025-12-09T04:26:52.429642919Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 09 04:26:52 functional-667319 containerd[5187]: time="2025-12-09T04:26:52.429717796Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 09 04:26:52 functional-667319 containerd[5187]: time="2025-12-09T04:26:52.429781302Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 09 04:26:52 functional-667319 containerd[5187]: time="2025-12-09T04:26:52.429840328Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 09 04:26:52 functional-667319 containerd[5187]: time="2025-12-09T04:26:52.429902915Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 09 04:26:52 functional-667319 containerd[5187]: time="2025-12-09T04:26:52.429981665Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 09 04:26:52 functional-667319 containerd[5187]: time="2025-12-09T04:26:52.430052966Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 09 04:26:52 functional-667319 containerd[5187]: time="2025-12-09T04:26:52.430137616Z" level=info msg="Connect containerd service"
	Dec 09 04:26:52 functional-667319 containerd[5187]: time="2025-12-09T04:26:52.430482828Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 09 04:26:52 functional-667319 containerd[5187]: time="2025-12-09T04:26:52.431094871Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 09 04:26:52 functional-667319 containerd[5187]: time="2025-12-09T04:26:52.446888716Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 09 04:26:52 functional-667319 containerd[5187]: time="2025-12-09T04:26:52.446963922Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 09 04:26:52 functional-667319 containerd[5187]: time="2025-12-09T04:26:52.447004126Z" level=info msg="Start subscribing containerd event"
	Dec 09 04:26:52 functional-667319 containerd[5187]: time="2025-12-09T04:26:52.447056186Z" level=info msg="Start recovering state"
	Dec 09 04:26:52 functional-667319 containerd[5187]: time="2025-12-09T04:26:52.486039443Z" level=info msg="Start event monitor"
	Dec 09 04:26:52 functional-667319 containerd[5187]: time="2025-12-09T04:26:52.486090379Z" level=info msg="Start cni network conf syncer for default"
	Dec 09 04:26:52 functional-667319 containerd[5187]: time="2025-12-09T04:26:52.486100127Z" level=info msg="Start streaming server"
	Dec 09 04:26:52 functional-667319 containerd[5187]: time="2025-12-09T04:26:52.486109505Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 09 04:26:52 functional-667319 containerd[5187]: time="2025-12-09T04:26:52.486121919Z" level=info msg="runtime interface starting up..."
	Dec 09 04:26:52 functional-667319 containerd[5187]: time="2025-12-09T04:26:52.486128778Z" level=info msg="starting plugins..."
	Dec 09 04:26:52 functional-667319 containerd[5187]: time="2025-12-09T04:26:52.486144646Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 09 04:26:52 functional-667319 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 09 04:26:52 functional-667319 containerd[5187]: time="2025-12-09T04:26:52.488088785Z" level=info msg="containerd successfully booted in 0.083246s"
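
	The only error in this containerd startup is the CRI plugin failing to find a CNI network config in /etc/cni/net.d ("cni plugin not initialized"). That is normal this early in boot: the directory is populated later by whichever CNI minikube installs, and the "Start cni network conf syncer" line shows containerd will pick the config up once it appears. A small standalone check of that directory, as a hedged sketch (this is diagnostic glue, not containerd code):

	// Report what CNI configs exist where containerd's CRI plugin looks.
	package main

	import (
		"fmt"
		"os"
		"path/filepath"
	)

	func main() {
		// containerd's CRI plugin watches this directory for .conf/.conflist files.
		matches, err := filepath.Glob("/etc/cni/net.d/*.conf*")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		if len(matches) == 0 {
			fmt.Println("no CNI config in /etc/cni/net.d (matches containerd's warning above)")
			return
		}
		for _, m := range matches {
			if info, err := os.Stat(m); err == nil {
				fmt.Printf("%s (%d bytes)\n", m, info.Size())
			}
		}
	}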
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:32:59.006034    8567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:32:59.006552    8567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:32:59.008170    8567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:32:59.008711    8567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:32:59.010279    8567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
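
	Every client in this report fails the same way: kubectl here dials localhost:8441, the readiness wait above dialed 192.168.49.2:8441, and both get "connection refused". That means nothing is listening on the apiserver port at all, rather than a routing or TLS problem. A minimal TCP probe that makes the distinction explicit (the address is the one from this report, and the sketch is purely illustrative):

	// Probe the apiserver port to distinguish "closed" from "unreachable".
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Address taken from this report; adjust for other clusters.
		conn, err := net.DialTimeout("tcp", "192.168.49.2:8441", 2*time.Second)
		if err != nil {
			// "connection refused" => the port is closed (no apiserver process);
			// a timeout here would instead suggest a network or firewall issue.
			fmt.Println("probe failed:", err)
			return
		}
		conn.Close()
		fmt.Println("something is listening on 8441")
	}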
	
	
	==> dmesg <==
	[Dec 9 03:13] overlayfs: idmapped layers are currently not supported
	[ +25.904254] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:14] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:16] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:18] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:19] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:30] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:32] overlayfs: idmapped layers are currently not supported
	[ +28.114653] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:33] overlayfs: idmapped layers are currently not supported
	[ +23.720849] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:34] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:35] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:36] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:37] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:38] overlayfs: idmapped layers are currently not supported
	[ +23.656275] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:39] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:57] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:58] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:00] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:02] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:03] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:05] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:06] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 04:32:59 up  7:15,  0 user,  load average: 0.21, 0.25, 0.79
	Linux functional-667319 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 09 04:32:55 functional-667319 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 09 04:32:56 functional-667319 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 811.
	Dec 09 04:32:56 functional-667319 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:32:56 functional-667319 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:32:56 functional-667319 kubelet[8418]: E1209 04:32:56.740675    8418 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 09 04:32:56 functional-667319 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 09 04:32:56 functional-667319 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 09 04:32:57 functional-667319 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 812.
	Dec 09 04:32:57 functional-667319 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:32:57 functional-667319 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:32:57 functional-667319 kubelet[8440]: E1209 04:32:57.494435    8440 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 09 04:32:57 functional-667319 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 09 04:32:57 functional-667319 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 09 04:32:58 functional-667319 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 813.
	Dec 09 04:32:58 functional-667319 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:32:58 functional-667319 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:32:58 functional-667319 kubelet[8475]: E1209 04:32:58.250425    8475 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 09 04:32:58 functional-667319 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 09 04:32:58 functional-667319 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 09 04:32:58 functional-667319 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 814.
	Dec 09 04:32:58 functional-667319 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:32:58 functional-667319 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:32:58 functional-667319 kubelet[8561]: E1209 04:32:58.981414    8561 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 09 04:32:58 functional-667319 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 09 04:32:58 functional-667319 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-667319 -n functional-667319
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-667319 -n functional-667319: exit status 2 (384.529104ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-667319" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (2.22s)
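The kubelet journal above pins down the root cause for this failure group: systemd is restart-looping kubelet (restart counters 811 through 814) because the v1.35.0-beta.0 kubelet refuses to start on a host that is still on cgroup v1. A minimal check of the node's cgroup mode, sketched on the assumption of shell access through this profile's minikube ssh (the commands themselves are standard coreutils, nothing minikube-specific):

	out/minikube-linux-arm64 -p functional-667319 ssh
	# Filesystem type mounted at /sys/fs/cgroup:
	#   "cgroup2fs" = unified cgroup v2 hierarchy, "tmpfs" = legacy cgroup v1
	stat -fc %T /sys/fs/cgroup/
	# cgroup.controllers exists only on the unified (v2) hierarchy
	cat /sys/fs/cgroup/cgroup.controllers 2>/dev/null || echo "no unified hierarchy (cgroup v1)"

Given the 5.15.0-1084-aws / Ubuntu 20.04 kernel shown in the kernel section above, the legacy answer is the expected one here, matching the kubelet validation error.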

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (2.27s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-667319 kubectl -- --context functional-667319 get pods
functional_test.go:731: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-667319 kubectl -- --context functional-667319 get pods: exit status 1 (113.008706ms)

** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:734: failed to get pods. args "out/minikube-linux-arm64 -p functional-667319 kubectl -- --context functional-667319 get pods": exit status 1
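The connection refused on 192.168.49.2:8441 is the downstream symptom of the same kubelet loop: with kubelet never coming up, the kube-apiserver static pod is never launched, so nothing listens on the apiserver port. A quick probe, sketched assuming the port mapping recorded in the docker inspect output below (8441/tcp published on 127.0.0.1:33903 for this run) and assuming curl on the host and ss inside the kicbase image are available:

	# From the host via the published apiserver port; curl exit code 7 means connection refused
	curl -sk https://127.0.0.1:33903/livez; echo "curl exit=$?"
	# Inside the node container: is anything bound on 8441 at all?
	docker exec functional-667319 sh -c 'ss -ltn | grep 8441 || echo "no listener on :8441"'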
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-667319
helpers_test.go:243: (dbg) docker inspect functional-667319:

-- stdout --
	[
	    {
	        "Id": "e5b6511799c8d5c445a335a3bd5cc9a61b518fc27ac93dad8800da366ef32129",
	        "Created": "2025-12-09T04:18:34.060957311Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1182075,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-09T04:18:34.126944158Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e4eb91ed18a24161fce60c7cdd660144ecd5b8c5029dc2dea2c5e423c2f48ce4",
	        "ResolvConfPath": "/var/lib/docker/containers/e5b6511799c8d5c445a335a3bd5cc9a61b518fc27ac93dad8800da366ef32129/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e5b6511799c8d5c445a335a3bd5cc9a61b518fc27ac93dad8800da366ef32129/hostname",
	        "HostsPath": "/var/lib/docker/containers/e5b6511799c8d5c445a335a3bd5cc9a61b518fc27ac93dad8800da366ef32129/hosts",
	        "LogPath": "/var/lib/docker/containers/e5b6511799c8d5c445a335a3bd5cc9a61b518fc27ac93dad8800da366ef32129/e5b6511799c8d5c445a335a3bd5cc9a61b518fc27ac93dad8800da366ef32129-json.log",
	        "Name": "/functional-667319",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-667319:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-667319",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e5b6511799c8d5c445a335a3bd5cc9a61b518fc27ac93dad8800da366ef32129",
	                "LowerDir": "/var/lib/docker/overlay2/b0239006282b6e4609a1f554d0a3fb94c749a13505795c8e4078cb2db194e8e0-init/diff:/var/lib/docker/overlay2/c44bb57aa59cc265266f37f2bb6e7ec0e7d641c3b4aeaa57e6d23deec6f0d1d4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b0239006282b6e4609a1f554d0a3fb94c749a13505795c8e4078cb2db194e8e0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b0239006282b6e4609a1f554d0a3fb94c749a13505795c8e4078cb2db194e8e0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b0239006282b6e4609a1f554d0a3fb94c749a13505795c8e4078cb2db194e8e0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-667319",
	                "Source": "/var/lib/docker/volumes/functional-667319/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-667319",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-667319",
	                "name.minikube.sigs.k8s.io": "functional-667319",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7c81dabcd9e57af9bce0bc0f5619f6ef3a27af43f4b649283a5bd778ab256415",
	            "SandboxKey": "/var/run/docker/netns/7c81dabcd9e5",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33900"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33901"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33904"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33902"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33903"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-667319": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "fe:40:bd:46:56:d8",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "88b3a65de70c15005c532a44219284d4df94e474ca5b78b04514c2f932b03beb",
	                    "EndpointID": "bdef7b156f4a28c1f641ae70b42db2750bb810ae6fe93fd65325e62eb232fe91",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-667319",
	                        "e5b6511799c8"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
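The inspect dump above is also the authoritative source for the host-side port mappings these tests rely on. A small sketch for recovering them with the stock docker CLI instead of scanning the JSON by eye (the inspect template below has the same shape as the one minikube itself runs for 22/tcp later in this log):

	# Host endpoint Docker published for the apiserver port
	docker port functional-667319 8441/tcp
	# -> 127.0.0.1:33903
	# Equivalent via an inspect template
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-667319
	# -> 33903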
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-667319 -n functional-667319
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-667319 -n functional-667319: exit status 2 (329.994643ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
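The host probe here returns Running even though the apiserver probe earlier returned Stopped: the container is up, but the control plane inside it is not. The per-field template queries these helpers issue one at a time can be combined into a single call; a sketch assuming the same binary and profile, using only status fields that minikube's default output already exposes:

	out/minikube-linux-arm64 -p functional-667319 status \
	  --format='host={{.Host}} kubelet={{.Kubelet}} apiserver={{.APIServer}} kubeconfig={{.Kubeconfig}}'
	# A non-zero exit (status 2 above) flags the degraded state even when the host is Running,
	# which is why the helper prints "may be ok" and continues.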
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-667319 logs -n 25
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                          ARGS                                                                           │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-717497 image ls --format short --alsologtostderr                                                                                             │ functional-717497 │ jenkins │ v1.37.0 │ 09 Dec 25 04:18 UTC │ 09 Dec 25 04:18 UTC │
	│ image   │ functional-717497 image ls --format yaml --alsologtostderr                                                                                              │ functional-717497 │ jenkins │ v1.37.0 │ 09 Dec 25 04:18 UTC │ 09 Dec 25 04:18 UTC │
	│ ssh     │ functional-717497 ssh pgrep buildkitd                                                                                                                   │ functional-717497 │ jenkins │ v1.37.0 │ 09 Dec 25 04:18 UTC │                     │
	│ image   │ functional-717497 image ls --format json --alsologtostderr                                                                                              │ functional-717497 │ jenkins │ v1.37.0 │ 09 Dec 25 04:18 UTC │ 09 Dec 25 04:18 UTC │
	│ image   │ functional-717497 image build -t localhost/my-image:functional-717497 testdata/build --alsologtostderr                                                  │ functional-717497 │ jenkins │ v1.37.0 │ 09 Dec 25 04:18 UTC │ 09 Dec 25 04:18 UTC │
	│ image   │ functional-717497 image ls --format table --alsologtostderr                                                                                             │ functional-717497 │ jenkins │ v1.37.0 │ 09 Dec 25 04:18 UTC │ 09 Dec 25 04:18 UTC │
	│ image   │ functional-717497 image ls                                                                                                                              │ functional-717497 │ jenkins │ v1.37.0 │ 09 Dec 25 04:18 UTC │ 09 Dec 25 04:18 UTC │
	│ delete  │ -p functional-717497                                                                                                                                    │ functional-717497 │ jenkins │ v1.37.0 │ 09 Dec 25 04:18 UTC │ 09 Dec 25 04:18 UTC │
	│ start   │ -p functional-667319 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:18 UTC │                     │
	│ start   │ -p functional-667319 --alsologtostderr -v=8                                                                                                             │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:26 UTC │                     │
	│ cache   │ functional-667319 cache add registry.k8s.io/pause:3.1                                                                                                   │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:32 UTC │ 09 Dec 25 04:33 UTC │
	│ cache   │ functional-667319 cache add registry.k8s.io/pause:3.3                                                                                                   │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:33 UTC │ 09 Dec 25 04:33 UTC │
	│ cache   │ functional-667319 cache add registry.k8s.io/pause:latest                                                                                                │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:33 UTC │ 09 Dec 25 04:33 UTC │
	│ cache   │ functional-667319 cache add minikube-local-cache-test:functional-667319                                                                                 │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:33 UTC │ 09 Dec 25 04:33 UTC │
	│ cache   │ functional-667319 cache delete minikube-local-cache-test:functional-667319                                                                              │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:33 UTC │ 09 Dec 25 04:33 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 09 Dec 25 04:33 UTC │ 09 Dec 25 04:33 UTC │
	│ cache   │ list                                                                                                                                                    │ minikube          │ jenkins │ v1.37.0 │ 09 Dec 25 04:33 UTC │ 09 Dec 25 04:33 UTC │
	│ ssh     │ functional-667319 ssh sudo crictl images                                                                                                                │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:33 UTC │ 09 Dec 25 04:33 UTC │
	│ ssh     │ functional-667319 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                      │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:33 UTC │ 09 Dec 25 04:33 UTC │
	│ ssh     │ functional-667319 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                 │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:33 UTC │                     │
	│ cache   │ functional-667319 cache reload                                                                                                                          │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:33 UTC │ 09 Dec 25 04:33 UTC │
	│ ssh     │ functional-667319 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                 │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:33 UTC │ 09 Dec 25 04:33 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 09 Dec 25 04:33 UTC │ 09 Dec 25 04:33 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                                     │ minikube          │ jenkins │ v1.37.0 │ 09 Dec 25 04:33 UTC │ 09 Dec 25 04:33 UTC │
	│ kubectl │ functional-667319 kubectl -- --context functional-667319 get pods                                                                                       │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:33 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/09 04:26:49
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 04:26:49.901158 1187425 out.go:360] Setting OutFile to fd 1 ...
	I1209 04:26:49.901350 1187425 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 04:26:49.901380 1187425 out.go:374] Setting ErrFile to fd 2...
	I1209 04:26:49.901407 1187425 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 04:26:49.902126 1187425 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-1142328/.minikube/bin
	I1209 04:26:49.902570 1187425 out.go:368] Setting JSON to false
	I1209 04:26:49.903455 1187425 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":25733,"bootTime":1765228677,"procs":156,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1209 04:26:49.903532 1187425 start.go:143] virtualization:  
	I1209 04:26:49.907035 1187425 out.go:179] * [functional-667319] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1209 04:26:49.910766 1187425 out.go:179]   - MINIKUBE_LOCATION=22081
	I1209 04:26:49.910878 1187425 notify.go:221] Checking for updates...
	I1209 04:26:49.916570 1187425 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 04:26:49.919423 1187425 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22081-1142328/kubeconfig
	I1209 04:26:49.922184 1187425 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-1142328/.minikube
	I1209 04:26:49.924947 1187425 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1209 04:26:49.927723 1187425 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 04:26:49.930999 1187425 config.go:182] Loaded profile config "functional-667319": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1209 04:26:49.931139 1187425 driver.go:422] Setting default libvirt URI to qemu:///system
	I1209 04:26:49.958230 1187425 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1209 04:26:49.958344 1187425 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 04:26:50.018007 1187425 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-09 04:26:50.006695366 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1209 04:26:50.018130 1187425 docker.go:319] overlay module found
	I1209 04:26:50.021068 1187425 out.go:179] * Using the docker driver based on existing profile
	I1209 04:26:50.024068 1187425 start.go:309] selected driver: docker
	I1209 04:26:50.024096 1187425 start.go:927] validating driver "docker" against &{Name:functional-667319 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-667319 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 04:26:50.024203 1187425 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 04:26:50.024322 1187425 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 04:26:50.086853 1187425 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-09 04:26:50.07716198 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1209 04:26:50.087299 1187425 cni.go:84] Creating CNI manager for ""
	I1209 04:26:50.087371 1187425 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1209 04:26:50.087429 1187425 start.go:353] cluster config:
	{Name:functional-667319 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-667319 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 04:26:50.090570 1187425 out.go:179] * Starting "functional-667319" primary control-plane node in "functional-667319" cluster
	I1209 04:26:50.093453 1187425 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1209 04:26:50.098431 1187425 out.go:179] * Pulling base image v0.0.48-1765184860-22066 ...
	I1209 04:26:50.101405 1187425 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1209 04:26:50.101471 1187425 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local docker daemon
	I1209 04:26:50.101485 1187425 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1209 04:26:50.101503 1187425 cache.go:65] Caching tarball of preloaded images
	I1209 04:26:50.101600 1187425 preload.go:238] Found /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1209 04:26:50.101616 1187425 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1209 04:26:50.101720 1187425 profile.go:143] Saving config to /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/config.json ...
	I1209 04:26:50.125607 1187425 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local docker daemon, skipping pull
	I1209 04:26:50.125633 1187425 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c exists in daemon, skipping load
	I1209 04:26:50.125648 1187425 cache.go:243] Successfully downloaded all kic artifacts
	I1209 04:26:50.125680 1187425 start.go:360] acquireMachinesLock for functional-667319: {Name:mk6c31f0747796f5f8ac8ea1653d6ee60fe2a47d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 04:26:50.125839 1187425 start.go:364] duration metric: took 130.318µs to acquireMachinesLock for "functional-667319"
	I1209 04:26:50.125869 1187425 start.go:96] Skipping create...Using existing machine configuration
	I1209 04:26:50.125878 1187425 fix.go:54] fixHost starting: 
	I1209 04:26:50.126147 1187425 cli_runner.go:164] Run: docker container inspect functional-667319 --format={{.State.Status}}
	I1209 04:26:50.147043 1187425 fix.go:112] recreateIfNeeded on functional-667319: state=Running err=<nil>
	W1209 04:26:50.147073 1187425 fix.go:138] unexpected machine state, will restart: <nil>
	I1209 04:26:50.150254 1187425 out.go:252] * Updating the running docker "functional-667319" container ...
	I1209 04:26:50.150291 1187425 machine.go:94] provisionDockerMachine start ...
	I1209 04:26:50.150379 1187425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-667319
	I1209 04:26:50.167513 1187425 main.go:143] libmachine: Using SSH client type: native
	I1209 04:26:50.167851 1187425 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 33900 <nil> <nil>}
	I1209 04:26:50.167868 1187425 main.go:143] libmachine: About to run SSH command:
	hostname
	I1209 04:26:50.327552 1187425 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-667319
	
	I1209 04:26:50.327578 1187425 ubuntu.go:182] provisioning hostname "functional-667319"
	I1209 04:26:50.327642 1187425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-667319
	I1209 04:26:50.345440 1187425 main.go:143] libmachine: Using SSH client type: native
	I1209 04:26:50.345757 1187425 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 33900 <nil> <nil>}
	I1209 04:26:50.345775 1187425 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-667319 && echo "functional-667319" | sudo tee /etc/hostname
	I1209 04:26:50.504917 1187425 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-667319
	
	I1209 04:26:50.505070 1187425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-667319
	I1209 04:26:50.522734 1187425 main.go:143] libmachine: Using SSH client type: native
	I1209 04:26:50.523054 1187425 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 33900 <nil> <nil>}
	I1209 04:26:50.523070 1187425 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-667319' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-667319/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-667319' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 04:26:50.676107 1187425 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1209 04:26:50.676133 1187425 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22081-1142328/.minikube CaCertPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22081-1142328/.minikube}
	I1209 04:26:50.676165 1187425 ubuntu.go:190] setting up certificates
	I1209 04:26:50.676182 1187425 provision.go:84] configureAuth start
	I1209 04:26:50.676245 1187425 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-667319
	I1209 04:26:50.692809 1187425 provision.go:143] copyHostCerts
	I1209 04:26:50.692850 1187425 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.pem
	I1209 04:26:50.692881 1187425 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.pem, removing ...
	I1209 04:26:50.692892 1187425 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.pem
	I1209 04:26:50.692964 1187425 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.pem (1078 bytes)
	I1209 04:26:50.693060 1187425 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22081-1142328/.minikube/cert.pem
	I1209 04:26:50.693088 1187425 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-1142328/.minikube/cert.pem, removing ...
	I1209 04:26:50.693096 1187425 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-1142328/.minikube/cert.pem
	I1209 04:26:50.693122 1187425 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22081-1142328/.minikube/cert.pem (1123 bytes)
	I1209 04:26:50.693175 1187425 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22081-1142328/.minikube/key.pem
	I1209 04:26:50.693199 1187425 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-1142328/.minikube/key.pem, removing ...
	I1209 04:26:50.693206 1187425 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-1142328/.minikube/key.pem
	I1209 04:26:50.693233 1187425 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22081-1142328/.minikube/key.pem (1675 bytes)
	I1209 04:26:50.693287 1187425 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca-key.pem org=jenkins.functional-667319 san=[127.0.0.1 192.168.49.2 functional-667319 localhost minikube]
	I1209 04:26:50.808459 1187425 provision.go:177] copyRemoteCerts
	I1209 04:26:50.808521 1187425 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 04:26:50.808568 1187425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-667319
	I1209 04:26:50.825015 1187425 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/functional-667319/id_rsa Username:docker}
	I1209 04:26:50.931904 1187425 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1209 04:26:50.931970 1187425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1209 04:26:50.950373 1187425 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1209 04:26:50.950430 1187425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1209 04:26:50.967052 1187425 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1209 04:26:50.967110 1187425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1209 04:26:50.984302 1187425 provision.go:87] duration metric: took 308.098174ms to configureAuth
	I1209 04:26:50.984386 1187425 ubuntu.go:206] setting minikube options for container-runtime
	I1209 04:26:50.984596 1187425 config.go:182] Loaded profile config "functional-667319": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1209 04:26:50.984634 1187425 machine.go:97] duration metric: took 834.335015ms to provisionDockerMachine
	I1209 04:26:50.984656 1187425 start.go:293] postStartSetup for "functional-667319" (driver="docker")
	I1209 04:26:50.984680 1187425 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 04:26:50.984759 1187425 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 04:26:50.984834 1187425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-667319
	I1209 04:26:51.005808 1187425 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/functional-667319/id_rsa Username:docker}
	I1209 04:26:51.112821 1187425 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 04:26:51.116496 1187425 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1209 04:26:51.116518 1187425 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1209 04:26:51.116523 1187425 command_runner.go:130] > VERSION_ID="12"
	I1209 04:26:51.116528 1187425 command_runner.go:130] > VERSION="12 (bookworm)"
	I1209 04:26:51.116532 1187425 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1209 04:26:51.116536 1187425 command_runner.go:130] > ID=debian
	I1209 04:26:51.116540 1187425 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1209 04:26:51.116545 1187425 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1209 04:26:51.116554 1187425 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1209 04:26:51.116627 1187425 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1209 04:26:51.116648 1187425 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1209 04:26:51.116659 1187425 filesync.go:126] Scanning /home/jenkins/minikube-integration/22081-1142328/.minikube/addons for local assets ...
	I1209 04:26:51.116715 1187425 filesync.go:126] Scanning /home/jenkins/minikube-integration/22081-1142328/.minikube/files for local assets ...
	I1209 04:26:51.116799 1187425 filesync.go:149] local asset: /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/ssl/certs/11442312.pem -> 11442312.pem in /etc/ssl/certs
	I1209 04:26:51.116806 1187425 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/ssl/certs/11442312.pem -> /etc/ssl/certs/11442312.pem
	I1209 04:26:51.116882 1187425 filesync.go:149] local asset: /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/test/nested/copy/1144231/hosts -> hosts in /etc/test/nested/copy/1144231
	I1209 04:26:51.116886 1187425 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/test/nested/copy/1144231/hosts -> /etc/test/nested/copy/1144231/hosts
	I1209 04:26:51.116933 1187425 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1144231
	I1209 04:26:51.124908 1187425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/ssl/certs/11442312.pem --> /etc/ssl/certs/11442312.pem (1708 bytes)
	I1209 04:26:51.143368 1187425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/test/nested/copy/1144231/hosts --> /etc/test/nested/copy/1144231/hosts (40 bytes)
	I1209 04:26:51.161824 1187425 start.go:296] duration metric: took 177.139225ms for postStartSetup
	I1209 04:26:51.161916 1187425 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1209 04:26:51.161982 1187425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-667319
	I1209 04:26:51.181271 1187425 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/functional-667319/id_rsa Username:docker}
	I1209 04:26:51.284406 1187425 command_runner.go:130] > 12%
	I1209 04:26:51.284922 1187425 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1209 04:26:51.288619 1187425 command_runner.go:130] > 172G
	I1209 04:26:51.288953 1187425 fix.go:56] duration metric: took 1.163071262s for fixHost
	I1209 04:26:51.288968 1187425 start.go:83] releasing machines lock for "functional-667319", held for 1.163111146s
	I1209 04:26:51.289042 1187425 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-667319
	I1209 04:26:51.305835 1187425 ssh_runner.go:195] Run: cat /version.json
	I1209 04:26:51.305885 1187425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-667319
	I1209 04:26:51.305897 1187425 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 04:26:51.305950 1187425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-667319
	I1209 04:26:51.325384 1187425 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/functional-667319/id_rsa Username:docker}
	I1209 04:26:51.327293 1187425 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/functional-667319/id_rsa Username:docker}
	I1209 04:26:51.427270 1187425 command_runner.go:130] > {"iso_version": "v1.37.0-1764843329-22032", "kicbase_version": "v0.0.48-1765184860-22066", "minikube_version": "v1.37.0", "commit": "27bcd52be11288bda2f9abde063aa47b22607695"}
	I1209 04:26:51.427541 1187425 ssh_runner.go:195] Run: systemctl --version
	I1209 04:26:51.517549 1187425 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1209 04:26:51.520210 1187425 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1209 04:26:51.520243 1187425 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1209 04:26:51.520320 1187425 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1209 04:26:51.524536 1187425 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1209 04:26:51.524574 1187425 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 04:26:51.524644 1187425 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 04:26:51.532138 1187425 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1209 04:26:51.532170 1187425 start.go:496] detecting cgroup driver to use...
	I1209 04:26:51.532202 1187425 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1209 04:26:51.532264 1187425 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1209 04:26:51.547055 1187425 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1209 04:26:51.559544 1187425 docker.go:218] disabling cri-docker service (if available) ...
	I1209 04:26:51.559644 1187425 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 04:26:51.574821 1187425 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 04:26:51.587447 1187425 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 04:26:51.703845 1187425 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 04:26:51.839863 1187425 docker.go:234] disabling docker service ...
	I1209 04:26:51.839930 1187425 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 04:26:51.856255 1187425 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 04:26:51.869081 1187425 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 04:26:51.995560 1187425 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 04:26:52.125293 1187425 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 04:26:52.137749 1187425 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 04:26:52.150135 1187425 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
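
The tee above leaves a one-line /etc/crictl.yaml pointing crictl at the containerd socket. An equivalent sketch, same endpoint assumed:

	cat <<-'EOF' | sudo tee /etc/crictl.yaml
	runtime-endpoint: unix:///run/containerd/containerd.sock
	EOF
	# Sanity check: crictl should now reach containerd without --runtime-endpoint.
	sudo crictl info >/dev/null && echo "crictl can reach containerd"
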
	I1209 04:26:52.151507 1187425 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1209 04:26:52.160197 1187425 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1209 04:26:52.168921 1187425 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1209 04:26:52.169008 1187425 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1209 04:26:52.177592 1187425 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1209 04:26:52.185997 1187425 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1209 04:26:52.194259 1187425 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1209 04:26:52.202620 1187425 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 04:26:52.210466 1187425 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1209 04:26:52.219232 1187425 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1209 04:26:52.227579 1187425 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
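
The run of sed edits above rewrites containerd's CRI settings in /etc/containerd/config.toml: the pause 3.10.1 sandbox image, SystemdCgroup = false to match the cgroupfs driver, the io.containerd.runc.v2 runtime type, the CNI conf_dir, and enable_unprivileged_ports. A sketch for inspecting the result on the node:

	sudo grep -nE 'sandbox_image|SystemdCgroup|conf_dir|enable_unprivileged_ports' \
	  /etc/containerd/config.toml
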
	I1209 04:26:52.236059 1187425 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 04:26:52.242619 1187425 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1209 04:26:52.243485 1187425 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
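
Both kernel knobs touched here are pod-networking prerequisites (bridge-nf-call-iptables exists only once the br_netfilter module is loaded). A sketch for confirming them in one go:

	# Both should print "= 1" on a correctly prepared node.
	sudo sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward
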
	I1209 04:26:52.250890 1187425 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 04:26:52.361246 1187425 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1209 04:26:52.490552 1187425 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1209 04:26:52.490653 1187425 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1209 04:26:52.497112 1187425 command_runner.go:130] >   File: /run/containerd/containerd.sock
	I1209 04:26:52.497174 1187425 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1209 04:26:52.497206 1187425 command_runner.go:130] > Device: 0,72	Inode: 1613        Links: 1
	I1209 04:26:52.497227 1187425 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1209 04:26:52.497247 1187425 command_runner.go:130] > Access: 2025-12-09 04:26:52.442263978 +0000
	I1209 04:26:52.497281 1187425 command_runner.go:130] > Modify: 2025-12-09 04:26:52.442263978 +0000
	I1209 04:26:52.497301 1187425 command_runner.go:130] > Change: 2025-12-09 04:26:52.442263978 +0000
	I1209 04:26:52.497319 1187425 command_runner.go:130] >  Birth: -
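
start.go polls for the socket with stat for up to 60s after the restart; an equivalent one-liner sketch of that wait loop:

	# Poll for the containerd socket, giving up after 60 seconds.
	timeout 60 sh -c 'until stat /run/containerd/containerd.sock >/dev/null 2>&1; do sleep 1; done'
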
	I1209 04:26:52.497534 1187425 start.go:564] Will wait 60s for crictl version
	I1209 04:26:52.497619 1187425 ssh_runner.go:195] Run: which crictl
	I1209 04:26:52.501257 1187425 command_runner.go:130] > /usr/local/bin/crictl
	I1209 04:26:52.502001 1187425 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1209 04:26:52.535942 1187425 command_runner.go:130] > Version:  0.1.0
	I1209 04:26:52.535964 1187425 command_runner.go:130] > RuntimeName:  containerd
	I1209 04:26:52.535970 1187425 command_runner.go:130] > RuntimeVersion:  v2.2.0
	I1209 04:26:52.535975 1187425 command_runner.go:130] > RuntimeApiVersion:  v1
	I1209 04:26:52.535985 1187425 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1209 04:26:52.536096 1187425 ssh_runner.go:195] Run: containerd --version
	I1209 04:26:52.556939 1187425 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
	I1209 04:26:52.562389 1187425 ssh_runner.go:195] Run: containerd --version
	I1209 04:26:52.582187 1187425 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
	I1209 04:26:52.587659 1187425 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1209 04:26:52.590705 1187425 cli_runner.go:164] Run: docker network inspect functional-667319 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1209 04:26:52.606900 1187425 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1209 04:26:52.610849 1187425 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1209 04:26:52.610974 1187425 kubeadm.go:884] updating cluster {Name:functional-667319 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-667319 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 04:26:52.611074 1187425 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1209 04:26:52.611135 1187425 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 04:26:52.634142 1187425 command_runner.go:130] > {
	I1209 04:26:52.634161 1187425 command_runner.go:130] >   "images":  [
	I1209 04:26:52.634166 1187425 command_runner.go:130] >     {
	I1209 04:26:52.634175 1187425 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1209 04:26:52.634180 1187425 command_runner.go:130] >       "repoTags":  [
	I1209 04:26:52.634186 1187425 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1209 04:26:52.634190 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.634194 1187425 command_runner.go:130] >       "repoDigests":  [
	I1209 04:26:52.634210 1187425 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1209 04:26:52.634213 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.634218 1187425 command_runner.go:130] >       "size":  "40636774",
	I1209 04:26:52.634222 1187425 command_runner.go:130] >       "username":  "",
	I1209 04:26:52.634230 1187425 command_runner.go:130] >       "pinned":  false
	I1209 04:26:52.634233 1187425 command_runner.go:130] >     },
	I1209 04:26:52.634236 1187425 command_runner.go:130] >     {
	I1209 04:26:52.634246 1187425 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1209 04:26:52.634251 1187425 command_runner.go:130] >       "repoTags":  [
	I1209 04:26:52.634256 1187425 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1209 04:26:52.634259 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.634263 1187425 command_runner.go:130] >       "repoDigests":  [
	I1209 04:26:52.634271 1187425 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1209 04:26:52.634274 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.634278 1187425 command_runner.go:130] >       "size":  "8034419",
	I1209 04:26:52.634282 1187425 command_runner.go:130] >       "username":  "",
	I1209 04:26:52.634286 1187425 command_runner.go:130] >       "pinned":  false
	I1209 04:26:52.634289 1187425 command_runner.go:130] >     },
	I1209 04:26:52.634292 1187425 command_runner.go:130] >     {
	I1209 04:26:52.634298 1187425 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1209 04:26:52.634302 1187425 command_runner.go:130] >       "repoTags":  [
	I1209 04:26:52.634307 1187425 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1209 04:26:52.634310 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.634317 1187425 command_runner.go:130] >       "repoDigests":  [
	I1209 04:26:52.634325 1187425 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1209 04:26:52.634328 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.634333 1187425 command_runner.go:130] >       "size":  "21168808",
	I1209 04:26:52.634337 1187425 command_runner.go:130] >       "username":  "nonroot",
	I1209 04:26:52.634341 1187425 command_runner.go:130] >       "pinned":  false
	I1209 04:26:52.634349 1187425 command_runner.go:130] >     },
	I1209 04:26:52.634355 1187425 command_runner.go:130] >     {
	I1209 04:26:52.634362 1187425 command_runner.go:130] >       "id":  "sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1209 04:26:52.634367 1187425 command_runner.go:130] >       "repoTags":  [
	I1209 04:26:52.634372 1187425 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1209 04:26:52.634375 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.634379 1187425 command_runner.go:130] >       "repoDigests":  [
	I1209 04:26:52.634387 1187425 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"
	I1209 04:26:52.634393 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.634397 1187425 command_runner.go:130] >       "size":  "21136588",
	I1209 04:26:52.634402 1187425 command_runner.go:130] >       "uid":  {
	I1209 04:26:52.634405 1187425 command_runner.go:130] >         "value":  "0"
	I1209 04:26:52.634408 1187425 command_runner.go:130] >       },
	I1209 04:26:52.634412 1187425 command_runner.go:130] >       "username":  "",
	I1209 04:26:52.634415 1187425 command_runner.go:130] >       "pinned":  false
	I1209 04:26:52.634418 1187425 command_runner.go:130] >     },
	I1209 04:26:52.634421 1187425 command_runner.go:130] >     {
	I1209 04:26:52.634428 1187425 command_runner.go:130] >       "id":  "sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1209 04:26:52.634431 1187425 command_runner.go:130] >       "repoTags":  [
	I1209 04:26:52.634437 1187425 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1209 04:26:52.634440 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.634443 1187425 command_runner.go:130] >       "repoDigests":  [
	I1209 04:26:52.634451 1187425 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58"
	I1209 04:26:52.634453 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.634457 1187425 command_runner.go:130] >       "size":  "24678359",
	I1209 04:26:52.634461 1187425 command_runner.go:130] >       "uid":  {
	I1209 04:26:52.634468 1187425 command_runner.go:130] >         "value":  "0"
	I1209 04:26:52.634471 1187425 command_runner.go:130] >       },
	I1209 04:26:52.634474 1187425 command_runner.go:130] >       "username":  "",
	I1209 04:26:52.634478 1187425 command_runner.go:130] >       "pinned":  false
	I1209 04:26:52.634480 1187425 command_runner.go:130] >     },
	I1209 04:26:52.634483 1187425 command_runner.go:130] >     {
	I1209 04:26:52.634490 1187425 command_runner.go:130] >       "id":  "sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1209 04:26:52.634493 1187425 command_runner.go:130] >       "repoTags":  [
	I1209 04:26:52.634499 1187425 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1209 04:26:52.634501 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.634505 1187425 command_runner.go:130] >       "repoDigests":  [
	I1209 04:26:52.634513 1187425 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d"
	I1209 04:26:52.634516 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.634520 1187425 command_runner.go:130] >       "size":  "20661043",
	I1209 04:26:52.634523 1187425 command_runner.go:130] >       "uid":  {
	I1209 04:26:52.634532 1187425 command_runner.go:130] >         "value":  "0"
	I1209 04:26:52.634535 1187425 command_runner.go:130] >       },
	I1209 04:26:52.634539 1187425 command_runner.go:130] >       "username":  "",
	I1209 04:26:52.634543 1187425 command_runner.go:130] >       "pinned":  false
	I1209 04:26:52.634546 1187425 command_runner.go:130] >     },
	I1209 04:26:52.634548 1187425 command_runner.go:130] >     {
	I1209 04:26:52.634555 1187425 command_runner.go:130] >       "id":  "sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1209 04:26:52.634558 1187425 command_runner.go:130] >       "repoTags":  [
	I1209 04:26:52.634563 1187425 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1209 04:26:52.634566 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.634569 1187425 command_runner.go:130] >       "repoDigests":  [
	I1209 04:26:52.634577 1187425 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1209 04:26:52.634580 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.634583 1187425 command_runner.go:130] >       "size":  "22429671",
	I1209 04:26:52.634587 1187425 command_runner.go:130] >       "username":  "",
	I1209 04:26:52.634591 1187425 command_runner.go:130] >       "pinned":  false
	I1209 04:26:52.634594 1187425 command_runner.go:130] >     },
	I1209 04:26:52.634597 1187425 command_runner.go:130] >     {
	I1209 04:26:52.634604 1187425 command_runner.go:130] >       "id":  "sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1209 04:26:52.634607 1187425 command_runner.go:130] >       "repoTags":  [
	I1209 04:26:52.634613 1187425 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1209 04:26:52.634616 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.634620 1187425 command_runner.go:130] >       "repoDigests":  [
	I1209 04:26:52.634627 1187425 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6"
	I1209 04:26:52.634630 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.634634 1187425 command_runner.go:130] >       "size":  "15391364",
	I1209 04:26:52.634638 1187425 command_runner.go:130] >       "uid":  {
	I1209 04:26:52.634641 1187425 command_runner.go:130] >         "value":  "0"
	I1209 04:26:52.634644 1187425 command_runner.go:130] >       },
	I1209 04:26:52.634649 1187425 command_runner.go:130] >       "username":  "",
	I1209 04:26:52.634653 1187425 command_runner.go:130] >       "pinned":  false
	I1209 04:26:52.634655 1187425 command_runner.go:130] >     },
	I1209 04:26:52.634659 1187425 command_runner.go:130] >     {
	I1209 04:26:52.634670 1187425 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1209 04:26:52.634674 1187425 command_runner.go:130] >       "repoTags":  [
	I1209 04:26:52.634678 1187425 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1209 04:26:52.634681 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.634685 1187425 command_runner.go:130] >       "repoDigests":  [
	I1209 04:26:52.634693 1187425 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1209 04:26:52.634695 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.634699 1187425 command_runner.go:130] >       "size":  "267939",
	I1209 04:26:52.634703 1187425 command_runner.go:130] >       "uid":  {
	I1209 04:26:52.634706 1187425 command_runner.go:130] >         "value":  "65535"
	I1209 04:26:52.634709 1187425 command_runner.go:130] >       },
	I1209 04:26:52.634713 1187425 command_runner.go:130] >       "username":  "",
	I1209 04:26:52.634717 1187425 command_runner.go:130] >       "pinned":  true
	I1209 04:26:52.634720 1187425 command_runner.go:130] >     }
	I1209 04:26:52.634723 1187425 command_runner.go:130] >   ]
	I1209 04:26:52.634726 1187425 command_runner.go:130] > }
	I1209 04:26:52.636238 1187425 containerd.go:627] all images are preloaded for containerd runtime.
	I1209 04:26:52.636265 1187425 containerd.go:534] Images already preloaded, skipping extraction
	I1209 04:26:52.636328 1187425 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 04:26:52.662300 1187425 command_runner.go:130] > {
	I1209 04:26:52.662318 1187425 command_runner.go:130] >   "images":  [
	I1209 04:26:52.662323 1187425 command_runner.go:130] >     {
	I1209 04:26:52.662332 1187425 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1209 04:26:52.662349 1187425 command_runner.go:130] >       "repoTags":  [
	I1209 04:26:52.662355 1187425 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1209 04:26:52.662358 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.662363 1187425 command_runner.go:130] >       "repoDigests":  [
	I1209 04:26:52.662375 1187425 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1209 04:26:52.662379 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.662383 1187425 command_runner.go:130] >       "size":  "40636774",
	I1209 04:26:52.662388 1187425 command_runner.go:130] >       "username":  "",
	I1209 04:26:52.662392 1187425 command_runner.go:130] >       "pinned":  false
	I1209 04:26:52.662395 1187425 command_runner.go:130] >     },
	I1209 04:26:52.662398 1187425 command_runner.go:130] >     {
	I1209 04:26:52.662406 1187425 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1209 04:26:52.662410 1187425 command_runner.go:130] >       "repoTags":  [
	I1209 04:26:52.662416 1187425 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1209 04:26:52.662420 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.662424 1187425 command_runner.go:130] >       "repoDigests":  [
	I1209 04:26:52.662436 1187425 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1209 04:26:52.662440 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.662444 1187425 command_runner.go:130] >       "size":  "8034419",
	I1209 04:26:52.662448 1187425 command_runner.go:130] >       "username":  "",
	I1209 04:26:52.662452 1187425 command_runner.go:130] >       "pinned":  false
	I1209 04:26:52.662460 1187425 command_runner.go:130] >     },
	I1209 04:26:52.662463 1187425 command_runner.go:130] >     {
	I1209 04:26:52.662470 1187425 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1209 04:26:52.662474 1187425 command_runner.go:130] >       "repoTags":  [
	I1209 04:26:52.662479 1187425 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1209 04:26:52.662482 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.662488 1187425 command_runner.go:130] >       "repoDigests":  [
	I1209 04:26:52.662496 1187425 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1209 04:26:52.662500 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.662504 1187425 command_runner.go:130] >       "size":  "21168808",
	I1209 04:26:52.662508 1187425 command_runner.go:130] >       "username":  "nonroot",
	I1209 04:26:52.662512 1187425 command_runner.go:130] >       "pinned":  false
	I1209 04:26:52.662515 1187425 command_runner.go:130] >     },
	I1209 04:26:52.662519 1187425 command_runner.go:130] >     {
	I1209 04:26:52.662525 1187425 command_runner.go:130] >       "id":  "sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1209 04:26:52.662529 1187425 command_runner.go:130] >       "repoTags":  [
	I1209 04:26:52.662534 1187425 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1209 04:26:52.662538 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.662541 1187425 command_runner.go:130] >       "repoDigests":  [
	I1209 04:26:52.662549 1187425 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"
	I1209 04:26:52.662552 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.662556 1187425 command_runner.go:130] >       "size":  "21136588",
	I1209 04:26:52.662561 1187425 command_runner.go:130] >       "uid":  {
	I1209 04:26:52.662565 1187425 command_runner.go:130] >         "value":  "0"
	I1209 04:26:52.662568 1187425 command_runner.go:130] >       },
	I1209 04:26:52.662572 1187425 command_runner.go:130] >       "username":  "",
	I1209 04:26:52.662576 1187425 command_runner.go:130] >       "pinned":  false
	I1209 04:26:52.662579 1187425 command_runner.go:130] >     },
	I1209 04:26:52.662585 1187425 command_runner.go:130] >     {
	I1209 04:26:52.662592 1187425 command_runner.go:130] >       "id":  "sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1209 04:26:52.662596 1187425 command_runner.go:130] >       "repoTags":  [
	I1209 04:26:52.662601 1187425 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1209 04:26:52.662605 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.662609 1187425 command_runner.go:130] >       "repoDigests":  [
	I1209 04:26:52.662617 1187425 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58"
	I1209 04:26:52.662619 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.662624 1187425 command_runner.go:130] >       "size":  "24678359",
	I1209 04:26:52.662627 1187425 command_runner.go:130] >       "uid":  {
	I1209 04:26:52.662639 1187425 command_runner.go:130] >         "value":  "0"
	I1209 04:26:52.662642 1187425 command_runner.go:130] >       },
	I1209 04:26:52.662646 1187425 command_runner.go:130] >       "username":  "",
	I1209 04:26:52.662650 1187425 command_runner.go:130] >       "pinned":  false
	I1209 04:26:52.662653 1187425 command_runner.go:130] >     },
	I1209 04:26:52.662656 1187425 command_runner.go:130] >     {
	I1209 04:26:52.662663 1187425 command_runner.go:130] >       "id":  "sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1209 04:26:52.662667 1187425 command_runner.go:130] >       "repoTags":  [
	I1209 04:26:52.662672 1187425 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1209 04:26:52.662675 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.662679 1187425 command_runner.go:130] >       "repoDigests":  [
	I1209 04:26:52.662687 1187425 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d"
	I1209 04:26:52.662690 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.662694 1187425 command_runner.go:130] >       "size":  "20661043",
	I1209 04:26:52.662697 1187425 command_runner.go:130] >       "uid":  {
	I1209 04:26:52.662701 1187425 command_runner.go:130] >         "value":  "0"
	I1209 04:26:52.662704 1187425 command_runner.go:130] >       },
	I1209 04:26:52.662707 1187425 command_runner.go:130] >       "username":  "",
	I1209 04:26:52.662712 1187425 command_runner.go:130] >       "pinned":  false
	I1209 04:26:52.662714 1187425 command_runner.go:130] >     },
	I1209 04:26:52.662717 1187425 command_runner.go:130] >     {
	I1209 04:26:52.662725 1187425 command_runner.go:130] >       "id":  "sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1209 04:26:52.662729 1187425 command_runner.go:130] >       "repoTags":  [
	I1209 04:26:52.662737 1187425 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1209 04:26:52.662741 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.662744 1187425 command_runner.go:130] >       "repoDigests":  [
	I1209 04:26:52.662752 1187425 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1209 04:26:52.662755 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.662759 1187425 command_runner.go:130] >       "size":  "22429671",
	I1209 04:26:52.662763 1187425 command_runner.go:130] >       "username":  "",
	I1209 04:26:52.662767 1187425 command_runner.go:130] >       "pinned":  false
	I1209 04:26:52.662770 1187425 command_runner.go:130] >     },
	I1209 04:26:52.662774 1187425 command_runner.go:130] >     {
	I1209 04:26:52.662781 1187425 command_runner.go:130] >       "id":  "sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1209 04:26:52.662785 1187425 command_runner.go:130] >       "repoTags":  [
	I1209 04:26:52.662791 1187425 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1209 04:26:52.662794 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.662798 1187425 command_runner.go:130] >       "repoDigests":  [
	I1209 04:26:52.662805 1187425 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6"
	I1209 04:26:52.662808 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.662813 1187425 command_runner.go:130] >       "size":  "15391364",
	I1209 04:26:52.662816 1187425 command_runner.go:130] >       "uid":  {
	I1209 04:26:52.662820 1187425 command_runner.go:130] >         "value":  "0"
	I1209 04:26:52.662823 1187425 command_runner.go:130] >       },
	I1209 04:26:52.662827 1187425 command_runner.go:130] >       "username":  "",
	I1209 04:26:52.662831 1187425 command_runner.go:130] >       "pinned":  false
	I1209 04:26:52.662834 1187425 command_runner.go:130] >     },
	I1209 04:26:52.662837 1187425 command_runner.go:130] >     {
	I1209 04:26:52.662843 1187425 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1209 04:26:52.662847 1187425 command_runner.go:130] >       "repoTags":  [
	I1209 04:26:52.662852 1187425 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1209 04:26:52.662855 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.662858 1187425 command_runner.go:130] >       "repoDigests":  [
	I1209 04:26:52.662866 1187425 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1209 04:26:52.662869 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.662873 1187425 command_runner.go:130] >       "size":  "267939",
	I1209 04:26:52.662881 1187425 command_runner.go:130] >       "uid":  {
	I1209 04:26:52.662886 1187425 command_runner.go:130] >         "value":  "65535"
	I1209 04:26:52.662890 1187425 command_runner.go:130] >       },
	I1209 04:26:52.662894 1187425 command_runner.go:130] >       "username":  "",
	I1209 04:26:52.662897 1187425 command_runner.go:130] >       "pinned":  true
	I1209 04:26:52.662900 1187425 command_runner.go:130] >     }
	I1209 04:26:52.662903 1187425 command_runner.go:130] >   ]
	I1209 04:26:52.662906 1187425 command_runner.go:130] > }
	I1209 04:26:52.665193 1187425 containerd.go:627] all images are preloaded for containerd runtime.
	I1209 04:26:52.665212 1187425 cache_images.go:86] Images are preloaded, skipping loading
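
The preload check parses the crictl JSON above and compares repoTags against the image list expected for v1.35.0-beta.0. The same query by hand, a sketch assuming jq is installed:

	sudo crictl images --output json | jq -r '.images[].repoTags[]'
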
	I1209 04:26:52.665219 1187425 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
	I1209 04:26:52.665322 1187425 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-667319 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-667319 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1209 04:26:52.665384 1187425 ssh_runner.go:195] Run: sudo crictl info
	I1209 04:26:52.686718 1187425 command_runner.go:130] > {
	I1209 04:26:52.686786 1187425 command_runner.go:130] >   "cniconfig": {
	I1209 04:26:52.686805 1187425 command_runner.go:130] >     "Networks": [
	I1209 04:26:52.686825 1187425 command_runner.go:130] >       {
	I1209 04:26:52.686864 1187425 command_runner.go:130] >         "Config": {
	I1209 04:26:52.686886 1187425 command_runner.go:130] >           "CNIVersion": "0.3.1",
	I1209 04:26:52.686905 1187425 command_runner.go:130] >           "Name": "cni-loopback",
	I1209 04:26:52.686923 1187425 command_runner.go:130] >           "Plugins": [
	I1209 04:26:52.686940 1187425 command_runner.go:130] >             {
	I1209 04:26:52.686967 1187425 command_runner.go:130] >               "Network": {
	I1209 04:26:52.686991 1187425 command_runner.go:130] >                 "ipam": {},
	I1209 04:26:52.687011 1187425 command_runner.go:130] >                 "type": "loopback"
	I1209 04:26:52.687028 1187425 command_runner.go:130] >               },
	I1209 04:26:52.687048 1187425 command_runner.go:130] >               "Source": "{\"type\":\"loopback\"}"
	I1209 04:26:52.687074 1187425 command_runner.go:130] >             }
	I1209 04:26:52.687097 1187425 command_runner.go:130] >           ],
	I1209 04:26:52.687120 1187425 command_runner.go:130] >           "Source": "{\n\"cniVersion\": \"0.3.1\",\n\"name\": \"cni-loopback\",\n\"plugins\": [{\n  \"type\": \"loopback\"\n}]\n}"
	I1209 04:26:52.687138 1187425 command_runner.go:130] >         },
	I1209 04:26:52.687160 1187425 command_runner.go:130] >         "IFName": "lo"
	I1209 04:26:52.687191 1187425 command_runner.go:130] >       }
	I1209 04:26:52.687207 1187425 command_runner.go:130] >     ],
	I1209 04:26:52.687225 1187425 command_runner.go:130] >     "PluginConfDir": "/etc/cni/net.d",
	I1209 04:26:52.687243 1187425 command_runner.go:130] >     "PluginDirs": [
	I1209 04:26:52.687272 1187425 command_runner.go:130] >       "/opt/cni/bin"
	I1209 04:26:52.687293 1187425 command_runner.go:130] >     ],
	I1209 04:26:52.687317 1187425 command_runner.go:130] >     "PluginMaxConfNum": 1,
	I1209 04:26:52.687334 1187425 command_runner.go:130] >     "Prefix": "eth"
	I1209 04:26:52.687351 1187425 command_runner.go:130] >   },
	I1209 04:26:52.687378 1187425 command_runner.go:130] >   "config": {
	I1209 04:26:52.687401 1187425 command_runner.go:130] >     "cdiSpecDirs": [
	I1209 04:26:52.687418 1187425 command_runner.go:130] >       "/etc/cdi",
	I1209 04:26:52.687438 1187425 command_runner.go:130] >       "/var/run/cdi"
	I1209 04:26:52.687457 1187425 command_runner.go:130] >     ],
	I1209 04:26:52.687483 1187425 command_runner.go:130] >     "cni": {
	I1209 04:26:52.687505 1187425 command_runner.go:130] >       "binDir": "",
	I1209 04:26:52.687560 1187425 command_runner.go:130] >       "binDirs": [
	I1209 04:26:52.687588 1187425 command_runner.go:130] >         "/opt/cni/bin"
	I1209 04:26:52.687609 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.687628 1187425 command_runner.go:130] >       "confDir": "/etc/cni/net.d",
	I1209 04:26:52.687646 1187425 command_runner.go:130] >       "confTemplate": "",
	I1209 04:26:52.687665 1187425 command_runner.go:130] >       "ipPref": "",
	I1209 04:26:52.687692 1187425 command_runner.go:130] >       "maxConfNum": 1,
	I1209 04:26:52.687715 1187425 command_runner.go:130] >       "setupSerially": false,
	I1209 04:26:52.687733 1187425 command_runner.go:130] >       "useInternalLoopback": false
	I1209 04:26:52.687749 1187425 command_runner.go:130] >     },
	I1209 04:26:52.687775 1187425 command_runner.go:130] >     "containerd": {
	I1209 04:26:52.687802 1187425 command_runner.go:130] >       "defaultRuntimeName": "runc",
	I1209 04:26:52.687825 1187425 command_runner.go:130] >       "ignoreBlockIONotEnabledErrors": false,
	I1209 04:26:52.687845 1187425 command_runner.go:130] >       "ignoreRdtNotEnabledErrors": false,
	I1209 04:26:52.687861 1187425 command_runner.go:130] >       "runtimes": {
	I1209 04:26:52.687878 1187425 command_runner.go:130] >         "runc": {
	I1209 04:26:52.687905 1187425 command_runner.go:130] >           "ContainerAnnotations": null,
	I1209 04:26:52.687929 1187425 command_runner.go:130] >           "PodAnnotations": null,
	I1209 04:26:52.687948 1187425 command_runner.go:130] >           "baseRuntimeSpec": "",
	I1209 04:26:52.687965 1187425 command_runner.go:130] >           "cgroupWritable": false,
	I1209 04:26:52.687982 1187425 command_runner.go:130] >           "cniConfDir": "",
	I1209 04:26:52.688009 1187425 command_runner.go:130] >           "cniMaxConfNum": 0,
	I1209 04:26:52.688042 1187425 command_runner.go:130] >           "io_type": "",
	I1209 04:26:52.688055 1187425 command_runner.go:130] >           "options": {
	I1209 04:26:52.688060 1187425 command_runner.go:130] >             "BinaryName": "",
	I1209 04:26:52.688065 1187425 command_runner.go:130] >             "CriuImagePath": "",
	I1209 04:26:52.688070 1187425 command_runner.go:130] >             "CriuWorkPath": "",
	I1209 04:26:52.688078 1187425 command_runner.go:130] >             "IoGid": 0,
	I1209 04:26:52.688082 1187425 command_runner.go:130] >             "IoUid": 0,
	I1209 04:26:52.688086 1187425 command_runner.go:130] >             "NoNewKeyring": false,
	I1209 04:26:52.688093 1187425 command_runner.go:130] >             "Root": "",
	I1209 04:26:52.688097 1187425 command_runner.go:130] >             "ShimCgroup": "",
	I1209 04:26:52.688109 1187425 command_runner.go:130] >             "SystemdCgroup": false
	I1209 04:26:52.688113 1187425 command_runner.go:130] >           },
	I1209 04:26:52.688118 1187425 command_runner.go:130] >           "privileged_without_host_devices": false,
	I1209 04:26:52.688128 1187425 command_runner.go:130] >           "privileged_without_host_devices_all_devices_allowed": false,
	I1209 04:26:52.688138 1187425 command_runner.go:130] >           "runtimePath": "",
	I1209 04:26:52.688145 1187425 command_runner.go:130] >           "runtimeType": "io.containerd.runc.v2",
	I1209 04:26:52.688153 1187425 command_runner.go:130] >           "sandboxer": "podsandbox",
	I1209 04:26:52.688157 1187425 command_runner.go:130] >           "snapshotter": ""
	I1209 04:26:52.688161 1187425 command_runner.go:130] >         }
	I1209 04:26:52.688164 1187425 command_runner.go:130] >       }
	I1209 04:26:52.688167 1187425 command_runner.go:130] >     },
	I1209 04:26:52.688181 1187425 command_runner.go:130] >     "containerdEndpoint": "/run/containerd/containerd.sock",
	I1209 04:26:52.688190 1187425 command_runner.go:130] >     "containerdRootDir": "/var/lib/containerd",
	I1209 04:26:52.688198 1187425 command_runner.go:130] >     "device_ownership_from_security_context": false,
	I1209 04:26:52.688205 1187425 command_runner.go:130] >     "disableApparmor": false,
	I1209 04:26:52.688210 1187425 command_runner.go:130] >     "disableHugetlbController": true,
	I1209 04:26:52.688218 1187425 command_runner.go:130] >     "disableProcMount": false,
	I1209 04:26:52.688223 1187425 command_runner.go:130] >     "drainExecSyncIOTimeout": "0s",
	I1209 04:26:52.688231 1187425 command_runner.go:130] >     "enableCDI": true,
	I1209 04:26:52.688235 1187425 command_runner.go:130] >     "enableSelinux": false,
	I1209 04:26:52.688240 1187425 command_runner.go:130] >     "enableUnprivilegedICMP": true,
	I1209 04:26:52.688248 1187425 command_runner.go:130] >     "enableUnprivilegedPorts": true,
	I1209 04:26:52.688253 1187425 command_runner.go:130] >     "ignoreDeprecationWarnings": null,
	I1209 04:26:52.688259 1187425 command_runner.go:130] >     "ignoreImageDefinedVolumes": false,
	I1209 04:26:52.688269 1187425 command_runner.go:130] >     "maxContainerLogLineSize": 16384,
	I1209 04:26:52.688278 1187425 command_runner.go:130] >     "netnsMountsUnderStateDir": false,
	I1209 04:26:52.688282 1187425 command_runner.go:130] >     "restrictOOMScoreAdj": false,
	I1209 04:26:52.688293 1187425 command_runner.go:130] >     "rootDir": "/var/lib/containerd/io.containerd.grpc.v1.cri",
	I1209 04:26:52.688297 1187425 command_runner.go:130] >     "selinuxCategoryRange": 1024,
	I1209 04:26:52.688306 1187425 command_runner.go:130] >     "stateDir": "/run/containerd/io.containerd.grpc.v1.cri",
	I1209 04:26:52.688312 1187425 command_runner.go:130] >     "tolerateMissingHugetlbController": true,
	I1209 04:26:52.688320 1187425 command_runner.go:130] >     "unsetSeccompProfile": ""
	I1209 04:26:52.688323 1187425 command_runner.go:130] >   },
	I1209 04:26:52.688327 1187425 command_runner.go:130] >   "features": {
	I1209 04:26:52.688332 1187425 command_runner.go:130] >     "supplemental_groups_policy": true
	I1209 04:26:52.688337 1187425 command_runner.go:130] >   },
	I1209 04:26:52.688341 1187425 command_runner.go:130] >   "golang": "go1.24.9",
	I1209 04:26:52.688355 1187425 command_runner.go:130] >   "lastCNILoadStatus": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1209 04:26:52.688368 1187425 command_runner.go:130] >   "lastCNILoadStatus.default": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1209 04:26:52.688376 1187425 command_runner.go:130] >   "runtimeHandlers": [
	I1209 04:26:52.688379 1187425 command_runner.go:130] >     {
	I1209 04:26:52.688388 1187425 command_runner.go:130] >       "features": {
	I1209 04:26:52.688394 1187425 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1209 04:26:52.688403 1187425 command_runner.go:130] >         "user_namespaces": true
	I1209 04:26:52.688406 1187425 command_runner.go:130] >       }
	I1209 04:26:52.688409 1187425 command_runner.go:130] >     },
	I1209 04:26:52.688412 1187425 command_runner.go:130] >     {
	I1209 04:26:52.688416 1187425 command_runner.go:130] >       "features": {
	I1209 04:26:52.688423 1187425 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1209 04:26:52.688432 1187425 command_runner.go:130] >         "user_namespaces": true
	I1209 04:26:52.688435 1187425 command_runner.go:130] >       },
	I1209 04:26:52.688439 1187425 command_runner.go:130] >       "name": "runc"
	I1209 04:26:52.688446 1187425 command_runner.go:130] >     }
	I1209 04:26:52.688449 1187425 command_runner.go:130] >   ],
	I1209 04:26:52.688457 1187425 command_runner.go:130] >   "status": {
	I1209 04:26:52.688461 1187425 command_runner.go:130] >     "conditions": [
	I1209 04:26:52.688469 1187425 command_runner.go:130] >       {
	I1209 04:26:52.688476 1187425 command_runner.go:130] >         "message": "",
	I1209 04:26:52.688484 1187425 command_runner.go:130] >         "reason": "",
	I1209 04:26:52.688488 1187425 command_runner.go:130] >         "status": true,
	I1209 04:26:52.688493 1187425 command_runner.go:130] >         "type": "RuntimeReady"
	I1209 04:26:52.688497 1187425 command_runner.go:130] >       },
	I1209 04:26:52.688502 1187425 command_runner.go:130] >       {
	I1209 04:26:52.688509 1187425 command_runner.go:130] >         "message": "Network plugin returns error: cni plugin not initialized",
	I1209 04:26:52.688518 1187425 command_runner.go:130] >         "reason": "NetworkPluginNotReady",
	I1209 04:26:52.688522 1187425 command_runner.go:130] >         "status": false,
	I1209 04:26:52.688530 1187425 command_runner.go:130] >         "type": "NetworkReady"
	I1209 04:26:52.688534 1187425 command_runner.go:130] >       },
	I1209 04:26:52.688541 1187425 command_runner.go:130] >       {
	I1209 04:26:52.688568 1187425 command_runner.go:130] >         "message": "{\"io.containerd.deprecation/cgroup-v1\":\"The support for cgroup v1 is deprecated since containerd v2.2 and will be removed by no later than May 2029. Upgrade the host to use cgroup v2.\"}",
	I1209 04:26:52.688578 1187425 command_runner.go:130] >         "reason": "ContainerdHasDeprecationWarnings",
	I1209 04:26:52.688584 1187425 command_runner.go:130] >         "status": false,
	I1209 04:26:52.688590 1187425 command_runner.go:130] >         "type": "ContainerdHasNoDeprecationWarnings"
	I1209 04:26:52.688595 1187425 command_runner.go:130] >       }
	I1209 04:26:52.688598 1187425 command_runner.go:130] >     ]
	I1209 04:26:52.688606 1187425 command_runner.go:130] >   }
	I1209 04:26:52.688609 1187425 command_runner.go:130] > }
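
Note the NetworkReady=false condition in the status block: /etc/cni/net.d is still empty at this point, which is why a CNI (kindnet) is selected immediately below. A sketch for pulling out just the runtime conditions, again assuming jq:

	sudo crictl info | jq '.status.conditions'
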
	I1209 04:26:52.690920 1187425 cni.go:84] Creating CNI manager for ""
	I1209 04:26:52.690942 1187425 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1209 04:26:52.690965 1187425 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1209 04:26:52.690987 1187425 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-667319 NodeName:functional-667319 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1209 04:26:52.691101 1187425 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-667319"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1209 04:26:52.691179 1187425 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1209 04:26:52.697985 1187425 command_runner.go:130] > kubeadm
	I1209 04:26:52.698006 1187425 command_runner.go:130] > kubectl
	I1209 04:26:52.698010 1187425 command_runner.go:130] > kubelet
	I1209 04:26:52.698825 1187425 binaries.go:51] Found k8s binaries, skipping transfer
	I1209 04:26:52.698896 1187425 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1209 04:26:52.706638 1187425 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1209 04:26:52.718822 1187425 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1209 04:26:52.731825 1187425 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
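
The generated kubeadm config lands in /var/tmp/minikube/kubeadm.yaml.new (the 2237-byte scp above). If the bundled kubeadm supports it (the validate subcommand appeared around v1.26), the file can be sanity-checked before use, a sketch:

	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new
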
	I1209 04:26:52.744962 1187425 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1209 04:26:52.748733 1187425 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1209 04:26:52.748987 1187425 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 04:26:52.855986 1187425 ssh_runner.go:195] Run: sudo systemctl start kubelet
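
With the unit (359 bytes) and the 10-kubeadm.conf drop-in (328 bytes) copied and the daemon reloaded, the effective kubelet unit can be verified on the node, a sketch:

	systemctl cat kubelet        # kubelet.service plus the 10-kubeadm.conf drop-in
	systemctl is-active kubelet
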
	I1209 04:26:53.181367 1187425 certs.go:69] Setting up /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319 for IP: 192.168.49.2
	I1209 04:26:53.181392 1187425 certs.go:195] generating shared ca certs ...
	I1209 04:26:53.181408 1187425 certs.go:227] acquiring lock for ca certs: {Name:mk15788702f8c4e23b5aeab3f44961d296fab259 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 04:26:53.181570 1187425 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.key
	I1209 04:26:53.181618 1187425 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/proxy-client-ca.key
	I1209 04:26:53.181630 1187425 certs.go:257] generating profile certs ...
	I1209 04:26:53.181740 1187425 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/client.key
	I1209 04:26:53.181805 1187425 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/apiserver.key.c80eb595
	I1209 04:26:53.181848 1187425 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/proxy-client.key
	I1209 04:26:53.181859 1187425 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1209 04:26:53.181873 1187425 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1209 04:26:53.181889 1187425 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1142328/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1209 04:26:53.181899 1187425 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1142328/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1209 04:26:53.181914 1187425 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1209 04:26:53.181925 1187425 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1209 04:26:53.181943 1187425 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1209 04:26:53.181954 1187425 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1209 04:26:53.182004 1187425 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/1144231.pem (1338 bytes)
	W1209 04:26:53.182038 1187425 certs.go:480] ignoring /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/1144231_empty.pem, impossibly tiny 0 bytes
	I1209 04:26:53.182050 1187425 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca-key.pem (1679 bytes)
	I1209 04:26:53.182079 1187425 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem (1078 bytes)
	I1209 04:26:53.182105 1187425 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/cert.pem (1123 bytes)
	I1209 04:26:53.182136 1187425 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/key.pem (1675 bytes)
	I1209 04:26:53.182187 1187425 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/ssl/certs/11442312.pem (1708 bytes)
	I1209 04:26:53.182243 1187425 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/ssl/certs/11442312.pem -> /usr/share/ca-certificates/11442312.pem
	I1209 04:26:53.182260 1187425 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1209 04:26:53.182277 1187425 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/1144231.pem -> /usr/share/ca-certificates/1144231.pem
	I1209 04:26:53.182817 1187425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 04:26:53.202751 1187425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1209 04:26:53.220083 1187425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 04:26:53.237728 1187425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1209 04:26:53.255002 1187425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1209 04:26:53.271923 1187425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1209 04:26:53.289401 1187425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 04:26:53.306616 1187425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1209 04:26:53.323564 1187425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/ssl/certs/11442312.pem --> /usr/share/ca-certificates/11442312.pem (1708 bytes)
	I1209 04:26:53.340526 1187425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 04:26:53.357221 1187425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/1144231.pem --> /usr/share/ca-certificates/1144231.pem (1338 bytes)
	I1209 04:26:53.373705 1187425 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 04:26:53.386274 1187425 ssh_runner.go:195] Run: openssl version
	I1209 04:26:53.391826 1187425 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1209 04:26:53.392252 1187425 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/11442312.pem
	I1209 04:26:53.399306 1187425 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/11442312.pem /etc/ssl/certs/11442312.pem
	I1209 04:26:53.406404 1187425 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11442312.pem
	I1209 04:26:53.409862 1187425 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec  9 04:18 /usr/share/ca-certificates/11442312.pem
	I1209 04:26:53.409914 1187425 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 04:18 /usr/share/ca-certificates/11442312.pem
	I1209 04:26:53.409972 1187425 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11442312.pem
	I1209 04:26:53.450109 1187425 command_runner.go:130] > 3ec20f2e
	I1209 04:26:53.450580 1187425 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1209 04:26:53.457724 1187425 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1209 04:26:53.464857 1187425 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1209 04:26:53.472136 1187425 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 04:26:53.475789 1187425 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec  9 04:09 /usr/share/ca-certificates/minikubeCA.pem
	I1209 04:26:53.475830 1187425 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 04:09 /usr/share/ca-certificates/minikubeCA.pem
	I1209 04:26:53.475880 1187425 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 04:26:53.517012 1187425 command_runner.go:130] > b5213941
	I1209 04:26:53.517090 1187425 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1209 04:26:53.524195 1187425 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1144231.pem
	I1209 04:26:53.531059 1187425 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1144231.pem /etc/ssl/certs/1144231.pem
	I1209 04:26:53.537929 1187425 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1144231.pem
	I1209 04:26:53.541362 1187425 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec  9 04:18 /usr/share/ca-certificates/1144231.pem
	I1209 04:26:53.541587 1187425 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 04:18 /usr/share/ca-certificates/1144231.pem
	I1209 04:26:53.541670 1187425 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1144231.pem
	I1209 04:26:53.586134 1187425 command_runner.go:130] > 51391683
	I1209 04:26:53.586694 1187425 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
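The loop above is minikube's CA-installation pattern (certs.go, run over ssh_runner): for each PEM it checks the file is non-empty, symlinks it under /etc/ssl/certs, computes the OpenSSL subject hash, and verifies the <hash>.0 symlink that OpenSSL uses for lookup. A minimal Go sketch of the same steps, shelling out to openssl exactly as the log does (installCACert is a hypothetical helper, not minikube's API):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCACert mirrors the log above: symlink the PEM into /etc/ssl/certs
// and create the <subject-hash>.0 symlink that OpenSSL uses for CA lookup.
func installCACert(pemPath string) error {
	// equivalent of: sudo ln -fs <pem> /etc/ssl/certs/<basename>
	dst := filepath.Join("/etc/ssl/certs", filepath.Base(pemPath))
	if err := os.Remove(dst); err != nil && !os.IsNotExist(err) {
		return err
	}
	if err := os.Symlink(pemPath, dst); err != nil {
		return err
	}
	// equivalent of: openssl x509 -hash -noout -in <pem>  -> e.g. "3ec20f2e"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out))
	// equivalent of: sudo test -L /etc/ssl/certs/<hash>.0 (create if missing)
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	if _, err := os.Lstat(link); os.IsNotExist(err) {
		return os.Symlink(pemPath, link)
	}
	return nil
}

func main() {
	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}

(Needs root and an openssl binary on PATH, like the remote commands in the log.)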
	I1209 04:26:53.593775 1187425 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 04:26:53.597060 1187425 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 04:26:53.597083 1187425 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1209 04:26:53.597090 1187425 command_runner.go:130] > Device: 259,1	Inode: 1317519     Links: 1
	I1209 04:26:53.597096 1187425 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1209 04:26:53.597101 1187425 command_runner.go:130] > Access: 2025-12-09 04:22:46.557738038 +0000
	I1209 04:26:53.597107 1187425 command_runner.go:130] > Modify: 2025-12-09 04:18:42.397294101 +0000
	I1209 04:26:53.597112 1187425 command_runner.go:130] > Change: 2025-12-09 04:18:42.397294101 +0000
	I1209 04:26:53.597120 1187425 command_runner.go:130] >  Birth: 2025-12-09 04:18:42.397294101 +0000
	I1209 04:26:53.597202 1187425 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1209 04:26:53.637326 1187425 command_runner.go:130] > Certificate will not expire
	I1209 04:26:53.637892 1187425 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1209 04:26:53.678262 1187425 command_runner.go:130] > Certificate will not expire
	I1209 04:26:53.678829 1187425 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1209 04:26:53.719319 1187425 command_runner.go:130] > Certificate will not expire
	I1209 04:26:53.719397 1187425 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1209 04:26:53.760102 1187425 command_runner.go:130] > Certificate will not expire
	I1209 04:26:53.760184 1187425 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1209 04:26:53.805340 1187425 command_runner.go:130] > Certificate will not expire
	I1209 04:26:53.805854 1187425 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1209 04:26:53.846216 1187425 command_runner.go:130] > Certificate will not expire
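Each probe above uses openssl's -checkend flag: `openssl x509 -noout -checkend 86400` exits 0 and prints "Certificate will not expire" when the certificate is still valid 86400 seconds (24 hours) from now, which is the output command_runner echoes back. The same check in pure Go, assuming the file holds a single PEM block (expiresWithin is a hypothetical helper):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM-encoded cert at path expires
// inside the given window (the log uses -checkend 86400, i.e. 24h).
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return cert.NotAfter.Before(time.Now().Add(window)), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if soon {
		fmt.Println("Certificate will expire")
	} else {
		fmt.Println("Certificate will not expire")
	}
}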
	I1209 04:26:53.846284 1187425 kubeadm.go:401] StartCluster: {Name:functional-667319 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-667319 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 04:26:53.846701 1187425 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1209 04:26:53.846774 1187425 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 04:26:53.877891 1187425 cri.go:89] found id: ""
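Before restarting the control plane, StartCluster lists any existing kube-system containers through crictl; the empty result at cri.go:89 (found id: "") means nothing is running yet. A sketch of that probe, invoking the same crictl flags shown above (listKubeSystemContainers is a hypothetical helper):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listKubeSystemContainers runs the crictl query from the log and returns
// the container IDs, one per output line (nil means none found).
func listKubeSystemContainers() ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return nil, err
	}
	trimmed := strings.TrimSpace(string(out))
	if trimmed == "" {
		return nil, nil // matches the log's `found id: ""`
	}
	return strings.Split(trimmed, "\n"), nil
}

func main() {
	ids, err := listKubeSystemContainers()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	fmt.Printf("found %d kube-system containers\n", len(ids))
}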
	I1209 04:26:53.877982 1187425 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1209 04:26:53.884657 1187425 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1209 04:26:53.884683 1187425 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1209 04:26:53.884690 1187425 command_runner.go:130] > /var/lib/minikube/etcd:
	I1209 04:26:53.885556 1187425 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1209 04:26:53.885572 1187425 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1209 04:26:53.885646 1187425 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1209 04:26:53.892789 1187425 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1209 04:26:53.893171 1187425 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-667319" does not appear in /home/jenkins/minikube-integration/22081-1142328/kubeconfig
	I1209 04:26:53.893275 1187425 kubeconfig.go:62] /home/jenkins/minikube-integration/22081-1142328/kubeconfig needs updating (will repair): [kubeconfig missing "functional-667319" cluster setting kubeconfig missing "functional-667319" context setting]
	I1209 04:26:53.893568 1187425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1142328/kubeconfig: {Name:mk0dc127429f88dc4fdfb2d110deebc58207c1b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
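kubeconfig.go:62 repairs the kubeconfig when the profile's cluster and context stanzas are missing, as flagged above. A hedged sketch of that repair using client-go's clientcmd package (ensureProfile is a hypothetical helper; minikube's real code also wires in auth info and the file lock shown in the log):

package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
	api "k8s.io/client-go/tools/clientcmd/api"
)

// ensureProfile adds cluster and context entries for name if absent,
// mirroring the "needs updating (will repair)" path in the log.
func ensureProfile(path, name, server string) error {
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		return err
	}
	changed := false
	if _, ok := cfg.Clusters[name]; !ok {
		c := api.NewCluster()
		c.Server = server
		cfg.Clusters[name] = c
		changed = true
	}
	if _, ok := cfg.Contexts[name]; !ok {
		ctx := api.NewContext()
		ctx.Cluster = name
		ctx.AuthInfo = name
		cfg.Contexts[name] = ctx
		changed = true
	}
	if changed {
		return clientcmd.WriteToFile(*cfg, path)
	}
	return nil
}

func main() {
	if err := ensureProfile(os.Getenv("KUBECONFIG"), "functional-667319", "https://192.168.49.2:8441"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}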
	I1209 04:26:53.893971 1187425 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22081-1142328/kubeconfig
	I1209 04:26:53.894121 1187425 kapi.go:59] client config for functional-667319: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/client.crt", KeyFile:"/home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/client.key", CAFile:"/home/jenkins/minikube-integration/22081-1142328/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb3ec0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1209 04:26:53.894601 1187425 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1209 04:26:53.894621 1187425 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1209 04:26:53.894627 1187425 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1209 04:26:53.894636 1187425 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1209 04:26:53.894643 1187425 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1209 04:26:53.894942 1187425 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1209 04:26:53.895030 1187425 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1209 04:26:53.902229 1187425 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1209 04:26:53.902301 1187425 kubeadm.go:602] duration metric: took 16.713333ms to restartPrimaryControlPlane
	I1209 04:26:53.902316 1187425 kubeadm.go:403] duration metric: took 56.036306ms to StartCluster
	I1209 04:26:53.902333 1187425 settings.go:142] acquiring lock: {Name:mk8fa744e3d74bf8a1cbf5ac275c9f1969ad91a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 04:26:53.902398 1187425 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22081-1142328/kubeconfig
	I1209 04:26:53.902993 1187425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1142328/kubeconfig: {Name:mk0dc127429f88dc4fdfb2d110deebc58207c1b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 04:26:53.903190 1187425 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1209 04:26:53.903521 1187425 config.go:182] Loaded profile config "functional-667319": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1209 04:26:53.903568 1187425 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1209 04:26:53.903630 1187425 addons.go:70] Setting storage-provisioner=true in profile "functional-667319"
	I1209 04:26:53.903643 1187425 addons.go:239] Setting addon storage-provisioner=true in "functional-667319"
	I1209 04:26:53.903675 1187425 host.go:66] Checking if "functional-667319" exists ...
	I1209 04:26:53.904120 1187425 addons.go:70] Setting default-storageclass=true in profile "functional-667319"
	I1209 04:26:53.904144 1187425 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-667319"
	I1209 04:26:53.904441 1187425 cli_runner.go:164] Run: docker container inspect functional-667319 --format={{.State.Status}}
	I1209 04:26:53.904640 1187425 cli_runner.go:164] Run: docker container inspect functional-667319 --format={{.State.Status}}
	I1209 04:26:53.910201 1187425 out.go:179] * Verifying Kubernetes components...
	I1209 04:26:53.913884 1187425 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 04:26:53.930099 1187425 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 04:26:53.932721 1187425 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22081-1142328/kubeconfig
	I1209 04:26:53.932880 1187425 kapi.go:59] client config for functional-667319: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/client.crt", KeyFile:"/home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/client.key", CAFile:"/home/jenkins/minikube-integration/22081-1142328/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb3ec0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1209 04:26:53.933092 1187425 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:26:53.933105 1187425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1209 04:26:53.933155 1187425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-667319
	I1209 04:26:53.933672 1187425 addons.go:239] Setting addon default-storageclass=true in "functional-667319"
	I1209 04:26:53.933726 1187425 host.go:66] Checking if "functional-667319" exists ...
	I1209 04:26:53.934157 1187425 cli_runner.go:164] Run: docker container inspect functional-667319 --format={{.State.Status}}
	I1209 04:26:53.980209 1187425 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/functional-667319/id_rsa Username:docker}
	I1209 04:26:53.991515 1187425 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1209 04:26:53.991543 1187425 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1209 04:26:53.991606 1187425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-667319
	I1209 04:26:54.014988 1187425 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/functional-667319/id_rsa Username:docker}
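Both addon uploads above reuse the SSH endpoint derived from docker container inspect: the Go template picks the host port Docker mapped to the container's 22/tcp, and sshutil then dials it on 127.0.0.1 (port 33900 here). A sketch of that lookup (hostSSHPort is a hypothetical helper around the same docker CLI call):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostSSHPort returns the host port Docker mapped to 22/tcp inside the
// named container, using the same template as cli_runner in the log.
func hostSSHPort(container string) (string, error) {
	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := hostSSHPort("functional-667319")
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("ssh endpoint: 127.0.0.1:" + port)
}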
	I1209 04:26:54.109673 1187425 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 04:26:54.172299 1187425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1209 04:26:54.172446 1187425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:26:54.932432 1187425 node_ready.go:35] waiting up to 6m0s for node "functional-667319" to be "Ready" ...
	I1209 04:26:54.932477 1187425 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:26:54.932512 1187425 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:26:54.932537 1187425 retry.go:31] will retry after 239.582285ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:26:54.932571 1187425 type.go:168] "Request Body" body=""
	I1209 04:26:54.932584 1187425 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:26:54.932596 1187425 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:26:54.932603 1187425 retry.go:31] will retry after 326.615849ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
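Each failed apply is rescheduled by retry.go with a growing, jittered delay (239ms, 326ms, 410ms, ... climbing to several seconds below). A minimal sketch of that pattern, assuming a roughly doubling backoff with random jitter rather than minikube's exact schedule:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff re-runs fn until it succeeds or attempts run out,
// sleeping a jittered, roughly doubling delay between tries, similar to
// the "will retry after ..." lines emitted by retry.go.
func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
	delay := base
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
		fmt.Printf("will retry after %v: %v\n", delay+jitter, err)
		time.Sleep(delay + jitter)
		delay *= 2
	}
	return err
}

func main() {
	calls := 0
	_ = retryWithBackoff(5, 200*time.Millisecond, func() error {
		calls++
		if calls < 3 {
			return fmt.Errorf("connection refused")
		}
		return nil
	})
	fmt.Println("succeeded after", calls, "attempts")
}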
	I1209 04:26:54.932629 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:26:54.932908 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
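node_ready.go polls GET /api/v1/nodes/functional-667319 at roughly 500ms intervals for up to 6m0s; while the apiserver is still down the response line stays empty and the connection-refused warning repeats every few attempts. A sketch of that wait using client-go (waitNodeReady is a hypothetical helper and assumes a working kubeconfig; the real code also logs each request body and round trip as above):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the node object until its Ready condition is True,
// tolerating transient errors such as "connection refused" while the
// apiserver restarts, as in the log above.
func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		} else {
			fmt.Println("will retry:", err)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("node %q not Ready within %v", name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitNodeReady(cs, "functional-667319", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}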
	I1209 04:26:55.173322 1187425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:26:55.233582 1187425 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:26:55.233631 1187425 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:26:55.233651 1187425 retry.go:31] will retry after 246.357107ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:26:55.259785 1187425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 04:26:55.318382 1187425 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:26:55.318469 1187425 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:26:55.318493 1187425 retry.go:31] will retry after 410.345383ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:26:55.433607 1187425 type.go:168] "Request Body" body=""
	I1209 04:26:55.433683 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:26:55.434019 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:26:55.480272 1187425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:26:55.539370 1187425 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:26:55.543073 1187425 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:26:55.543104 1187425 retry.go:31] will retry after 836.674318ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:26:55.729246 1187425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 04:26:55.790859 1187425 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:26:55.790906 1187425 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:26:55.790952 1187425 retry.go:31] will retry after 634.479833ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:26:55.933159 1187425 type.go:168] "Request Body" body=""
	I1209 04:26:55.933235 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:26:55.933592 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:26:56.380124 1187425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:26:56.425589 1187425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 04:26:56.432912 1187425 type.go:168] "Request Body" body=""
	I1209 04:26:56.433084 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:26:56.433454 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:26:56.462533 1187425 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:26:56.462616 1187425 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:26:56.462643 1187425 retry.go:31] will retry after 603.323732ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:26:56.528272 1187425 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:26:56.528318 1187425 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:26:56.528338 1187425 retry.go:31] will retry after 1.072780189s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:26:56.932753 1187425 type.go:168] "Request Body" body=""
	I1209 04:26:56.932827 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:26:56.933209 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:26:56.933265 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:26:57.066591 1187425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:26:57.132172 1187425 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:26:57.135761 1187425 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:26:57.135793 1187425 retry.go:31] will retry after 1.855495012s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:26:57.433210 1187425 type.go:168] "Request Body" body=""
	I1209 04:26:57.433286 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:26:57.433630 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:26:57.601957 1187425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 04:26:57.657995 1187425 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:26:57.658038 1187425 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:26:57.658057 1187425 retry.go:31] will retry after 1.134842328s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:26:57.933276 1187425 type.go:168] "Request Body" body=""
	I1209 04:26:57.933355 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:26:57.933644 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:26:58.433445 1187425 type.go:168] "Request Body" body=""
	I1209 04:26:58.433533 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:26:58.433853 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:26:58.793130 1187425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 04:26:58.858674 1187425 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:26:58.858714 1187425 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:26:58.858733 1187425 retry.go:31] will retry after 2.746713696s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:26:58.933078 1187425 type.go:168] "Request Body" body=""
	I1209 04:26:58.933157 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:26:58.933497 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:26:58.933557 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:26:58.991692 1187425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:26:59.049214 1187425 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:26:59.052768 1187425 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:26:59.052797 1187425 retry.go:31] will retry after 2.715253433s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:26:59.433202 1187425 type.go:168] "Request Body" body=""
	I1209 04:26:59.433383 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:26:59.433760 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:26:59.932622 1187425 type.go:168] "Request Body" body=""
	I1209 04:26:59.932706 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:26:59.933025 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:00.432716 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:00.432792 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:00.433084 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:00.932666 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:00.932767 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:00.933080 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:01.432721 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:01.432802 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:01.433155 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:27:01.433220 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:27:01.606514 1187425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 04:27:01.664108 1187425 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:27:01.667800 1187425 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:27:01.667831 1187425 retry.go:31] will retry after 3.567848129s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:27:01.769041 1187425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:27:01.828356 1187425 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:27:01.831855 1187425 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:27:01.831890 1187425 retry.go:31] will retry after 1.487712174s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:27:01.933283 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:01.933357 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:01.933696 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:02.433227 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:02.433296 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:02.433566 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:02.933365 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:02.933446 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:02.933784 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:03.320437 1187425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:27:03.380650 1187425 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:27:03.380689 1187425 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:27:03.380707 1187425 retry.go:31] will retry after 2.980491619s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:27:03.432967 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:03.433052 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:03.433335 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:27:03.433382 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:27:03.933173 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:03.933261 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:03.933564 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:04.433334 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:04.433407 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:04.433774 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:04.932608 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:04.932706 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:04.932991 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:05.236581 1187425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 04:27:05.294920 1187425 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:27:05.298256 1187425 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:27:05.298287 1187425 retry.go:31] will retry after 3.775902085s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:27:05.433544 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:05.433623 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:05.433911 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:27:05.433968 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:27:05.932633 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:05.932732 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:05.933097 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:06.361776 1187425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:27:06.423571 1187425 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:27:06.423609 1187425 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:27:06.423628 1187425 retry.go:31] will retry after 5.55631863s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
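storage-provisioner.yaml follows the same failure path as storageclass.yaml but on its own retry schedule, which is why the two applies interleave in the log. A sketch of that fan-out, assuming the applies run concurrently (the interleaving suggests it, but this is not minikube's addons code); the kubectl invocation and manifest paths are copied from the log:

package main

import (
	"fmt"
	"os/exec"
	"sync"
)

func main() {
	manifests := []string{
		"/etc/kubernetes/addons/storageclass.yaml",
		"/etc/kubernetes/addons/storage-provisioner.yaml",
	}
	var wg sync.WaitGroup
	for _, m := range manifests {
		wg.Add(1)
		go func(m string) {
			defer wg.Done()
			// sudo accepts leading VAR=value arguments as environment settings.
			cmd := exec.Command("sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
				"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
				"apply", "--force", "-f", m)
			out, err := cmd.CombinedOutput()
			fmt.Printf("%s: err=%v\n%s", m, err, out)
		}(m)
	}
	wg.Wait()
}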
	I1209 04:27:06.432687 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:06.432759 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:06.433064 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:06.932763 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:06.932858 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:06.933188 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:07.432720 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:07.432798 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:07.433122 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:07.932712 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:07.932788 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:07.933143 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:27:07.933270 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:27:08.432753 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:08.432826 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:08.433121 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:08.932708 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:08.932789 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:08.933114 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:09.074480 1187425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 04:27:09.131213 1187425 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:27:09.134642 1187425 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:27:09.134677 1187425 retry.go:31] will retry after 3.336397846s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:27:09.433063 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:09.433136 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:09.433477 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:09.933147 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:09.933243 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:09.933515 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:27:09.933565 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:27:10.433463 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:10.433543 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:10.433860 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:10.933720 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:10.933792 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:10.934110 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:11.432758 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:11.432831 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:11.433103 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:11.932702 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:11.932775 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:11.933127 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:11.980489 1187425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:27:12.042917 1187425 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:27:12.047245 1187425 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:27:12.047276 1187425 retry.go:31] will retry after 4.846358398s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:27:12.432667 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:12.432737 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:12.433027 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:27:12.433074 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:27:12.471387 1187425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 04:27:12.533451 1187425 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:27:12.533488 1187425 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:27:12.533508 1187425 retry.go:31] will retry after 12.396608004s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
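Each failed apply is handed back to retry.go with a randomized, growing delay: 3.8s, 5.6s, 3.3s, 4.8s, 12.4s so far, and up to roughly 30s later in this run. A jittered-backoff loop in that spirit; the constants and helper name are illustrative, not retry.go's actual parameters:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff runs fn until it succeeds or attempts are exhausted,
// sleeping a jittered, roughly doubling interval between failures, which
// matches the irregular "will retry after ..." delays in the log above.
func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
	delay := base
	for i := 0; i < attempts; i++ {
		if err := fn(); err == nil {
			return nil
		}
		jitter := time.Duration(rand.Int63n(int64(delay)))
		sleep := delay/2 + jitter // random in [delay/2, 3*delay/2)
		fmt.Printf("will retry after %s\n", sleep)
		time.Sleep(sleep)
		delay *= 2
	}
	return errors.New("all attempts failed")
}

func main() {
	_ = retryWithBackoff(5, 4*time.Second, func() error {
		return errors.New("connection refused") // stand-in for the kubectl apply
	})
}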
	I1209 04:27:12.932956 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:12.933031 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:12.933353 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:13.432721 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:13.432794 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:13.433126 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:13.932935 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:13.933007 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:13.933342 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:14.432734 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:14.432802 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:14.433056 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:27:14.433098 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:27:14.932653 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:14.932768 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:14.933061 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:15.432698 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:15.432796 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:15.433182 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:15.932668 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:15.932746 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:15.933050 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:16.432712 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:16.432788 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:16.433123 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:27:16.433176 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:27:16.894794 1187425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:27:16.933270 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:16.933350 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:16.933633 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:16.956237 1187425 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:27:16.956277 1187425 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:27:16.956299 1187425 retry.go:31] will retry after 11.708634593s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:27:17.432723 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:17.432798 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:17.433065 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:17.932740 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:17.932815 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:17.933136 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:18.432860 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:18.432932 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:18.433214 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:27:18.433267 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:27:18.932660 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:18.932728 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:18.933009 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:19.432696 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:19.432772 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:19.433147 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:19.932674 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:19.932750 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:19.933101 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:20.432907 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:20.432984 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:20.433236 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:20.932684 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:20.932760 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:20.933100 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:27:20.933152 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:27:21.432797 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:21.432871 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:21.433197 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:21.932637 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:21.932726 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:21.932993 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:22.432707 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:22.432782 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:22.433117 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:22.932841 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:22.932917 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:22.933234 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:27:22.933291 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:27:23.432668 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:23.432751 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:23.433027 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:23.932873 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:23.932948 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:23.933315 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:24.432680 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:24.432753 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:24.433071 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:24.930697 1187425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 04:27:24.933014 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:24.933088 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:24.933320 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:27:24.933369 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:27:25.005568 1187425 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:27:25.005627 1187425 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:27:25.005648 1187425 retry.go:31] will retry after 8.82909482s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:27:25.433152 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:25.433233 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:25.433532 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:25.932972 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:25.933044 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:25.933358 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:26.432756 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:26.432830 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:26.433108 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:26.932726 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:26.932803 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:26.933099 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:27.432693 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:27.432765 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:27.433082 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:27:27.433136 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:27:27.932636 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:27.932712 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:27.933033 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:28.432693 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:28.432767 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:28.433092 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:28.665515 1187425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:27:28.738878 1187425 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:27:28.745399 1187425 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:27:28.745439 1187425 retry.go:31] will retry after 17.60519501s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:27:28.932773 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:28.932863 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:28.933172 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:29.432651 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:29.432722 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:29.432984 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:29.932660 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:29.932732 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:29.933044 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:27:29.933094 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:27:30.432735 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:30.432809 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:30.433166 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:30.932654 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:30.932753 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:30.933041 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:31.432690 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:31.432771 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:31.433110 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:31.932741 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:31.932815 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:31.933152 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:27:31.933206 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:27:32.432841 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:32.432914 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:32.433177 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:32.932689 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:32.932763 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:32.933056 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:33.432759 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:33.432858 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:33.433217 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:33.835821 1187425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 04:27:33.901341 1187425 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:27:33.901393 1187425 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:27:33.901417 1187425 retry.go:31] will retry after 15.074885047s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:27:33.933523 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:33.933593 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:33.933865 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:27:33.933909 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:27:34.433650 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:34.433727 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:34.434057 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:34.933020 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:34.933101 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:34.933420 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:35.433095 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:35.433165 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:35.433445 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:35.933243 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:35.933325 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:35.933633 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:36.433407 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:36.433483 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:36.433826 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:27:36.433882 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:27:36.933227 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:36.933299 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:36.933563 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:37.433288 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:37.433419 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:37.433790 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:37.933592 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:37.933667 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:37.934021 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:38.432659 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:38.432729 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:38.433014 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:38.932721 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:38.932798 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:38.933137 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:27:38.933190 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:27:39.432858 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:39.432933 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:39.433235 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:39.932589 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:39.932669 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:39.932951 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:40.432701 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:40.432786 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:40.433116 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:40.932716 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:40.932797 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:40.933091 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:41.432779 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:41.432846 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:41.433142 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:27:41.433204 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:27:41.932681 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:41.932757 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:41.933101 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:42.432844 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:42.432919 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:42.433290 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:42.932967 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:42.933038 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:42.933352 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:43.432697 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:43.432812 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:43.433136 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:43.933033 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:43.933129 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:43.933472 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:27:43.933526 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:27:44.433250 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:44.433328 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:44.433660 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:44.933653 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:44.933724 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:44.934068 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:45.432640 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:45.432721 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:45.433020 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:45.932669 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:45.932752 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:45.933159 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:46.350898 1187425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:27:46.406595 1187425 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:27:46.409949 1187425 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:27:46.409981 1187425 retry.go:31] will retry after 30.377142014s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:27:46.433127 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:46.433197 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:46.433514 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:27:46.433571 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	[log condensed: 5 more GET https://192.168.49.2:8441/api/v1/nodes/functional-667319 polls at ~500ms intervals, 04:27:46.93 to 04:27:48.93, each with an empty response; the node_ready "connection refused" retry warning repeats at 04:27:48.933]
	I1209 04:27:48.977251 1187425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 04:27:49.036457 1187425 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:27:49.036497 1187425 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:27:49.036517 1187425 retry.go:31] will retry after 20.293703248s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[log condensed: 40 more identical polls of the same URL, 04:27:49.43 to 04:28:08.93, all with empty responses; the node_ready "connection refused" retry warning recurs at 04:27:50.9, 53.4, 55.4, 57.9, 04:28:00.4, 02.9, 05.4 and 07.4]
	I1209 04:28:09.330698 1187425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 04:28:09.392626 1187425 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:28:09.392671 1187425 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:28:09.392765 1187425 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
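The error text itself names the escape hatch: --validate=false skips the OpenAPI download that is failing here. Note that it would not have rescued the addon, since validation is merely the first thing to hit the dead apiserver; the apply itself still needs a reachable https://localhost:8441. A hypothetical re-run of the same command with validation off, via os/exec, with the paths copied from the log:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same command as the log, plus the flag kubectl's error suggests.
	cmd := exec.Command("sudo",
		"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
		"apply", "--force", "--validate=false",
		"-f", "/etc/kubernetes/addons/storageclass.yaml")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		// Still fails while nothing listens on localhost:8441.
		fmt.Println("apply failed:", err)
	}
}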
	[log condensed: 15 more identical polls, 04:28:09.43 to 04:28:16.43, all with empty responses; node_ready "connection refused" retry warnings at 04:28:09.9, 11.9, 14.4 and 16.4]
	I1209 04:28:16.787371 1187425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:28:16.844461 1187425 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:28:16.844502 1187425 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:28:16.844590 1187425 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1209 04:28:16.849226 1187425 out.go:179] * Enabled addons: 
	I1209 04:28:16.852870 1187425 addons.go:530] duration metric: took 1m22.949297316s for enable addons: enabled=[]
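After 1m22.9s of callbacks the addon manager gives up with an empty enabled list, while the readiness poll below keeps running. Everything in this stretch hinges on one classification: a refused TCP dial means "nothing listening yet, retry", not "fail". A small Linux-specific sketch of that distinction; the address is the apiserver endpoint from the log, the timeout is an assumption:

package main

import (
	"errors"
	"fmt"
	"net"
	"syscall"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "192.168.49.2:8441", 2*time.Second)
	switch {
	case err == nil:
		conn.Close()
		fmt.Println("apiserver is listening")
	case errors.Is(err, syscall.ECONNREFUSED):
		// Nothing bound to the port yet: transient, keep polling.
		fmt.Println("connection refused; retry")
	default:
		fmt.Println("other dial error:", err)
	}
}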
	[log condensed: polling continues unchanged at ~500ms intervals from 04:28:16.93 through 04:28:40.43 (47 more complete request/response cycles), every response empty, with node_ready "connection refused" retry warnings at 04:28:18.4, 20.4, 22.4, 24.9, 26.9, 29.4, 31.9, 33.9, 35.9 and 38.4; a 48th request is cut off by the end of the excerpt, and the apiserver at 192.168.49.2:8441 never answers within it]
	 >
	I1209 04:28:40.433129 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:28:40.433184 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:28:40.932927 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:40.933008 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:40.933371 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:41.432689 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:41.432758 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:41.433014 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:41.932710 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:41.932795 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:41.933094 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:42.432689 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:42.432808 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:42.433149 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:28:42.433204 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:28:42.932862 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:42.932928 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:42.933226 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:43.432918 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:43.432995 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:43.433361 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:43.933127 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:43.933204 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:43.933534 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:44.433220 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:44.433305 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:44.433609 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:28:44.433661 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:28:44.933573 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:44.933652 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:44.933989 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:45.432671 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:45.432808 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:45.433150 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:45.932712 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:45.932784 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:45.933049 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:46.432736 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:46.432815 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:46.433149 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:46.932701 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:46.932779 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:46.933073 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:28:46.933121 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
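
The paired "Request"/"Response" lines come from client-go's round-tripper logging (round_trippers.go): verb, URL, and selected headers are printed before the request goes out, and the status plus latency after it returns. A connection-level failure never produces an HTTP status line, which is why every response above is logged as status="" with milliseconds=0. A minimal, hypothetical stand-in for that kind of wrapper (plain net/http, not client-go's actual implementation) could be:

	// loggingRT wraps another http.RoundTripper and prints request/response
	// metadata in the spirit of the round_trippers.go lines above.
	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	type loggingRT struct{ next http.RoundTripper }

	func (l loggingRT) RoundTrip(req *http.Request) (*http.Response, error) {
		fmt.Printf("Request verb=%q url=%q accept=%q user-agent=%q\n",
			req.Method, req.URL.String(), req.Header.Get("Accept"), req.Header.Get("User-Agent"))
		start := time.Now()
		resp, err := l.next.RoundTrip(req)
		ms := time.Since(start).Milliseconds()
		if err != nil {
			// Dial or TLS failures (e.g. connection refused) return no
			// *http.Response, hence an empty status in the log.
			fmt.Printf("Response status=%q milliseconds=%d err=%v\n", "", ms, err)
			return nil, err
		}
		fmt.Printf("Response status=%q milliseconds=%d\n", resp.Status, ms)
		return resp, nil
	}

	func main() {
		client := &http.Client{Transport: loggingRT{next: http.DefaultTransport}}
		// Against a downed apiserver this takes the error path, like the log.
		_, _ = client.Get("https://192.168.49.2:8441/api/v1/nodes/functional-667319")
	}
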
	I1209 04:28:47.432739 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:47.432826 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:47.433130 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:47.932695 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:47.932765 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:47.933076 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:48.432672 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:48.432745 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:48.433062 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:48.932639 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:48.932746 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:48.933042 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:49.432707 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:49.432782 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:49.433123 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:28:49.433177 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:28:49.932922 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:49.932995 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:49.933579 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:50.433185 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:50.433253 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:50.433517 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:50.933391 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:50.933468 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:50.933797 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:51.433551 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:51.433624 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:51.433934 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:28:51.433990 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:28:51.933180 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:51.933283 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:51.933542 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:52.433358 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:52.433437 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:52.433756 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:52.933478 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:52.933559 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:52.933900 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:53.433153 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:53.433229 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:53.433491 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:53.932707 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:53.932895 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:53.933271 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:28:53.933325 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:28:54.432706 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:54.432787 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:54.433082 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:54.933635 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:54.933745 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:54.934087 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:55.432697 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:55.432773 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:55.433110 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:55.932879 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:55.932954 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:55.933290 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:28:55.933359 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:28:56.432868 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:56.432941 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:56.433305 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:56.932702 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:56.932781 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:56.933131 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:57.432846 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:57.432925 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:57.433270 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:57.932659 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:57.932734 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:57.933027 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:58.432714 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:58.432792 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:58.433128 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:28:58.433197 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:28:58.932868 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:58.932944 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:58.933265 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:59.432668 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:59.432735 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:59.432989 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:59.932616 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:59.932707 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:59.933033 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:00.432720 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:00.432802 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:00.433159 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:29:00.433228 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:29:00.932660 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:00.932731 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:00.933053 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:01.432716 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:01.432794 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:01.433137 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:01.932702 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:01.932776 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:01.933098 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:02.432775 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:02.432843 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:02.433108 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:02.932791 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:02.932873 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:02.933214 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:29:02.933284 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:29:03.432715 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:03.432795 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:03.433113 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:03.933003 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:03.933076 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:03.933364 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:04.432671 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:04.432749 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:04.433066 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:04.932620 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:04.932694 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:04.933013 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:05.432723 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:05.432790 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:05.433120 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:29:05.433184 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:29:05.932842 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:05.932925 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:05.933228 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:06.432721 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:06.432798 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:06.433119 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:06.932673 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:06.932758 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:06.933065 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:07.432690 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:07.432761 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:07.433037 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:07.932687 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:07.932769 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:07.933108 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:29:07.933164 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:29:08.432792 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:08.432858 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:08.433117 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:08.932787 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:08.932863 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:08.933157 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:09.432693 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:09.432764 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:09.433055 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:09.932595 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:09.932672 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:09.932942 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:10.432649 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:10.432719 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:10.433035 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:29:10.433090 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
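
Every request in this loop advertises Accept: application/vnd.kubernetes.protobuf,application/json, i.e. the client prefers protobuf encoding and falls back to JSON when the server cannot provide it. With client-go this negotiation is set on rest.Config; a small hypothetical sketch (the kubeconfig path and the final print are illustrative only):

	package main

	import (
		"fmt"

		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		// Matches the Accept header in the requests above: prefer protobuf,
		// fall back to JSON if the server cannot encode the resource.
		cfg.AcceptContentTypes = "application/vnd.kubernetes.protobuf,application/json"
		cfg.ContentType = "application/vnd.kubernetes.protobuf"
		cs := kubernetes.NewForConfigOrDie(cfg)
		fmt.Println("client configured:", cs != nil)
	}
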
	I1209 04:29:10.932796 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:10.932869 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:10.933200 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:11.432701 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:11.432802 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:11.433137 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:11.932772 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:11.932869 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:11.933219 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:12.432724 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:12.432804 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:12.433120 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:29:12.433175 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:29:12.932648 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:12.932732 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:12.933021 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:13.432608 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:13.432696 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:13.432999 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:13.932923 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:13.932996 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:13.933301 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:14.433005 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:14.433076 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:14.433349 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:29:14.433392 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:29:14.933312 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:14.933390 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:14.933705 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:15.433476 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:15.433554 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:15.433865 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:15.933207 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:15.933288 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:15.933572 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:16.433400 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:16.433476 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:16.433794 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:29:16.433849 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:29:16.933242 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:16.933322 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:16.933648 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:17.433213 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:17.433292 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:17.433548 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:17.933339 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:17.933416 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:17.933707 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:18.433434 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:18.433516 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:18.433853 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:29:18.433907 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:29:18.933184 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:18.933260 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:18.933504 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:19.433298 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:19.433371 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:19.433705 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:19.933618 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:19.933716 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:19.934086 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:20.432647 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:20.432722 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:20.433052 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:20.932730 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:20.932801 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:20.933102 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:29:20.933155 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:29:21.432687 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:21.432769 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:21.433095 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:21.932654 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:21.932755 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:21.933080 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:22.432734 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:22.432823 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:22.433185 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:22.932923 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:22.933014 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:22.933448 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:29:22.933504 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:29:23.433275 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:23.433350 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:23.433652 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:23.933627 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:23.933712 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:23.934033 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:24.432724 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:24.432802 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:24.433135 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:24.932905 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:24.932975 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:24.933297 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:25.432701 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:25.432775 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:25.433100 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:29:25.433159 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:29:25.932854 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:25.932931 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:25.933286 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:26.432982 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:26.433053 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:26.433514 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:26.933295 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:26.933368 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:26.933684 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:27.433488 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:27.433566 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:27.433940 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:29:27.434009 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:29:27.932662 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:27.932734 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:27.933007 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:28.432710 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:28.432783 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:28.433097 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:28.932665 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:28.932744 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:28.933074 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:29.432741 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:29.432816 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:29.433060 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:29.932619 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:29.932701 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:29.933015 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:29:29.933073 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the request/response cycle shown above repeats unchanged on a roughly 500 ms cadence from 04:29:30 through 04:30:30; every GET of /api/v1/nodes/functional-667319 fails immediately with an empty response, and node_ready.go:55 emits the same "will retry" warning about every two seconds. Only the timestamps differ across the elided attempts; the final warning is: ...]
	W1209 04:30:30.933239 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:30:31.432899 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:31.432979 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:31.433354 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:31.933050 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:31.933119 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:31.933461 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:32.433235 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:32.433315 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:32.433644 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:32.933442 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:32.933524 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:32.933825 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:30:32.933872 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:30:33.433233 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:33.433304 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:33.433591 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:33.933547 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:33.933627 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:33.933938 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:34.432679 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:34.432763 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:34.433108 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:34.933586 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:34.933660 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:34.933905 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:30:34.933945 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:30:35.432656 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:35.432733 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:35.433079 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:35.932801 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:35.932887 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:35.933268 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:36.432736 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:36.432805 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:36.433059 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:36.932731 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:36.932806 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:36.933156 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:37.432867 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:37.432942 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:37.433311 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:30:37.433368 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:30:37.932650 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:37.932720 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:37.932998 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:38.432684 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:38.432763 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:38.433055 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:38.932741 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:38.932818 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:38.933136 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:39.432679 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:39.432748 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:39.433040 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:39.932790 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:39.932869 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:39.933219 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:30:39.933279 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:30:40.432703 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:40.432777 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:40.433111 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:40.932641 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:40.932707 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:40.932957 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:41.432666 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:41.432744 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:41.433069 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:41.932847 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:41.932929 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:41.933224 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:42.432889 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:42.432958 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:42.433265 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:30:42.433309 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:30:42.932708 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:42.932789 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:42.933126 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:43.432820 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:43.432902 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:43.433230 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:43.933144 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:43.933213 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:43.933465 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:44.433223 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:44.433300 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:44.433652 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:30:44.433704 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:30:44.933589 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:44.933670 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:44.934005 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:45.432690 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:45.432762 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:45.433007 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:45.932747 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:45.932822 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:45.933163 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:46.432880 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:46.432953 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:46.433265 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:46.932662 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:46.932736 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:46.933048 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:30:46.933099 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:30:47.432707 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:47.432797 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:47.433190 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:47.932887 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:47.932971 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:47.933316 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:48.432720 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:48.432792 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:48.433100 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:48.932688 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:48.932768 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:48.933088 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:30:48.933148 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:30:49.432733 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:49.432809 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:49.433125 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:49.933000 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:49.933071 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:49.933338 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:50.433013 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:50.433086 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:50.433573 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:50.933345 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:50.933421 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:50.933709 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:30:50.933750 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:30:51.433232 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:51.433307 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:51.433630 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:51.933396 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:51.933477 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:51.933822 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:52.433445 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:52.433526 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:52.433848 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:52.933226 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:52.933298 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:52.933562 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:53.433320 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:53.433394 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:53.433724 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:30:53.433778 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:30:53.932930 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:53.933016 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:53.933473 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:54.433004 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:54.433155 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:54.433480 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:54.933346 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:54.933427 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:54.933751 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:55.433491 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:55.433571 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:55.433940 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:30:55.434008 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:30:55.933242 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:55.933327 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:55.933662 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:56.433447 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:56.433527 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:56.433865 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:56.933651 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:56.933744 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:56.934082 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:57.432792 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:57.432864 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:57.433162 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:57.932683 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:57.932753 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:57.933114 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:30:57.933173 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:30:58.432860 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:58.432937 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:58.433264 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:58.932676 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:58.932748 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:58.932997 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:59.432729 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:59.432808 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:59.433150 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:59.933081 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:59.933159 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:59.933480 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:30:59.933530 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:31:00.433233 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:00.433315 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:00.433580 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:00.933316 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:00.933394 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:00.933727 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:01.433533 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:01.433611 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:01.433948 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:01.933228 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:01.933301 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:01.933558 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:31:01.933611 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:31:02.433377 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:02.433451 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:02.433800 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:02.933601 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:02.933680 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:02.933967 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:03.432648 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:03.432726 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:03.432986 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:03.932952 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:03.933038 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:03.933395 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:04.433141 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:04.433218 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:04.433526 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:31:04.433581 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:31:04.933489 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:04.933558 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:04.933807 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:05.433605 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:05.433678 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:05.434011 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:05.932712 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:05.932791 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:05.933127 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:06.432820 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:06.432900 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:06.433220 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:06.932716 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:06.932796 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:06.933135 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:31:06.933230 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:31:07.432675 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:07.432749 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:07.433059 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:07.932657 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:07.932734 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:07.933058 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:08.432730 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:08.432806 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:08.433103 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:08.932714 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:08.932789 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:08.933128 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:09.432655 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:09.432733 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:09.432994 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:31:09.433050 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:31:09.932911 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:09.932991 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:09.933336 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:10.432707 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:10.432787 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:10.433102 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:10.932665 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:10.932738 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:10.933044 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:11.432740 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:11.432823 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:11.433154 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:31:11.433214 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:31:11.932885 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:11.932968 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:11.933325 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:12.432666 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:12.432738 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:12.433048 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:12.932727 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:12.932804 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:12.933136 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:13.432857 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:13.432936 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:13.433268 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:31:13.433318 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:31:13.933237 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:13.933317 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:13.933599 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:14.433349 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:14.433424 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:14.433772 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:14.933659 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:14.933736 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:14.934065 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:15.432670 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:15.432747 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:15.433015 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:15.932715 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:15.932801 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:15.933127 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:31:15.933183 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:31:16.432689 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:16.432770 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:16.433108 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:16.932805 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:16.932881 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:16.933165 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:17.432843 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:17.432921 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:17.433248 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:17.932974 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:17.933055 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:17.933357 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:31:17.933406 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:31:18.432810 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:18.432881 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:18.433142 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:18.932706 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:18.932778 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:18.933130 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:19.432699 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:19.432777 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:19.433122 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:19.932861 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:19.932936 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:19.933225 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:20.432912 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:20.432990 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:20.433312 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:31:20.433361 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:31:20.933001 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:20.933084 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:20.933413 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:21.432656 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:21.432769 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:21.433112 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:21.932708 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:21.932788 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:21.933119 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:22.432682 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:22.432760 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:22.433128 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:22.932684 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:22.932752 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:22.932998 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:31:22.933038 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:31:23.432679 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:23.432761 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:23.433116 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:23.932894 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:23.932973 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:23.933311 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:24.432655 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:24.432728 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:24.432998 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:24.932905 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:24.932983 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:24.933347 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:31:24.933403 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:31:25.432684 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:25.432758 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:25.433091 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:25.932662 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:25.932737 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:25.933053 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:26.432697 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:26.432782 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:26.433111 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:26.932702 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:26.932782 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:26.933089 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:27.432642 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:27.432721 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:27.432985 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:31:27.433025 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
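
Note the User-Agent on every request: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format. The v0.0.0 and the literal $Format indicate a binary whose version variables were not stamped at build time; client-go derives the string from its version package, roughly as in this sketch (assuming client-go's rest package, whose DefaultKubernetesUserAgent helper builds exactly this format):

    package main

    import (
        "fmt"

        "k8s.io/client-go/rest"
    )

    func main() {
        // On a build where -ldflags did not inject real version/commit info,
        // this prints something like "main/v0.0.0 (linux/arm64) kubernetes/$Format",
        // matching the User-Agent captured in the log above.
        fmt.Println(rest.DefaultKubernetesUserAgent())
    }
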
	I1209 04:31:27.932736 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:27.932813 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:27.933163 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:28.432736 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:28.432812 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:28.433107 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:28.932662 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:28.932730 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:28.933022 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:29.432735 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:29.432813 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:29.433101 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:31:29.433149 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:31:29.932650 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:29.932724 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:29.933059 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:30.432740 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:30.432807 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:30.433108 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:30.932710 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:30.932784 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:30.933148 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:31.432857 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:31.432937 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:31.433271 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:31:31.433325 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:31:31.932657 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:31.932730 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:31.933052 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:32.432705 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:32.432797 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:32.433154 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:32.932867 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:32.932945 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:32.933298 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:33.433002 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:33.433120 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:33.433453 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:31:33.433504 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:31:33.933313 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:33.933388 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:33.933720 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:34.432992 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:34.433115 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:34.433477 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:34.933299 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:34.933372 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:34.933678 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:35.433472 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:35.433550 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:35.433863 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:31:35.433925 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:31:35.932642 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:35.932726 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:35.933082 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:36.432718 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:36.432804 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:36.433204 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:36.932932 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:36.933006 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:36.933324 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:37.432701 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:37.432781 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:37.433098 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:37.932655 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:37.932744 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:37.933007 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:31:37.933067 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:31:38.432684 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:38.432762 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:38.433096 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:38.932741 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:38.932818 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:38.933151 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:39.432752 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:39.432821 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:39.433106 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:39.933656 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:39.933728 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:39.933989 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:31:39.934033 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:31:40.432680 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:40.432765 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:40.433112 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:40.932669 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:40.932734 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:40.933053 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:41.432690 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:41.432780 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:41.433200 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:41.932877 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:41.932953 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:41.933290 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:42.432904 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:42.432982 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:42.433302 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:31:42.433355 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:31:42.932706 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:42.932798 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:42.933087 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:43.432700 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:43.432776 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:43.433120 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:43.933050 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:43.933118 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:43.933424 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:44.432712 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:44.432790 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:44.433076 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:44.932987 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:44.933069 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:44.933451 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:31:44.933508 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:31:45.432652 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:45.432721 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:45.433020 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:45.932767 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:45.932842 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:45.933175 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:46.432686 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:46.432759 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:46.433102 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:46.932655 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:46.932727 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:46.933006 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:47.432684 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:47.432764 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:47.433090 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:31:47.433151 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:31:47.932730 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:47.932804 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:47.933206 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:48.432734 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:48.432808 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:48.433081 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:48.932704 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:48.932788 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:48.933086 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:49.432676 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:49.432758 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:49.433091 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:49.932841 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:49.932922 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:49.933214 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:31:49.933265 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
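
Once the apiserver starts accepting connections, each GET returns the Node object and the poller can inspect its Ready condition. A minimal sketch of that check with client-go follows (assuming a kubeconfig for this cluster at the default path; the 5-minute timeout and overall shape are invented for illustration, while the node name and 500ms interval come from the log):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        nodeName := "functional-667319"
        err = wait.PollUntilContextTimeout(context.Background(),
            500*time.Millisecond, 5*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                node, err := cs.CoreV1().Nodes().Get(ctx, nodeName, metav1.GetOptions{})
                if err != nil {
                    // Matches the warnings above: swallow the error and keep polling.
                    fmt.Printf("error getting node (will retry): %v\n", err)
                    return false, nil
                }
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
        fmt.Println("wait finished:", err)
    }
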
	I1209 04:31:50.432697 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:50.432774 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:50.433083 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:50.932736 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:50.932814 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:50.933144 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:51.432737 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:51.432809 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:51.433098 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:51.932682 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:51.932765 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:51.933115 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:52.432819 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:52.432896 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:52.433241 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:31:52.433300 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:31:52.932671 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:52.932743 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:52.933011 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:53.432692 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:53.432763 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:53.433098 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:53.933090 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:53.933164 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:53.933488 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:54.432669 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:54.432748 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:54.433065 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:54.932936 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:54.933012 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:54.933364 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:31:54.933419 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:31:55.433079 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:55.433151 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:55.433486 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:55.933226 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:55.933296 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:55.933560 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:56.433428 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:56.433505 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:56.433878 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:56.932635 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:56.932709 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:56.933027 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:57.432658 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:57.432736 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:57.433044 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:31:57.433099 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:31:57.932711 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:57.932795 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:57.933103 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:58.432714 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:58.432787 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:58.433121 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:58.932651 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:58.932719 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:58.932975 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:59.432680 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:59.432759 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:59.433101 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:31:59.433157 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:31:59.932861 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:59.932939 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:59.933269 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:00.432699 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:00.432786 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:00.433188 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:00.932718 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:00.932793 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:00.933119 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:01.432702 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:01.432778 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:01.433132 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:32:01.433188 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:32:01.932953 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:01.933059 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:01.933405 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:02.433066 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:02.433138 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:02.433476 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:02.933290 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:02.933362 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:02.933678 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:03.433228 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:03.433307 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:03.433557 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:32:03.433604 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:32:03.933531 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:03.933606 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:03.933926 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:04.432634 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:04.432709 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:04.433045 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:04.932774 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:04.932840 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:04.933129 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:05.432832 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:05.432907 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:05.433248 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:05.932726 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:05.932801 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:05.933145 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:32:05.933201 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:32:06.432676 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:06.432754 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:06.433037 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:06.932744 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:06.932823 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:06.933214 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:07.432896 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:07.432968 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:07.433319 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:07.933026 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:07.933110 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:07.933393 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:32:07.933441 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:32:08.432735 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:08.432817 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:08.433284 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:08.932871 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:08.932978 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:08.933325 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:09.432676 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:09.432743 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:09.432980 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:09.932851 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:09.932929 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:09.933264 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:10.432726 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:10.432817 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:10.433167 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:32:10.433217 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:32:10.932660 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:10.932732 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:10.933027 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:11.432714 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:11.432790 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:11.433154 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:11.932874 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:11.932955 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:11.933284 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:12.432656 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:12.432727 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:12.432974 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:12.932659 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:12.932737 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:12.933062 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:32:12.933115 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
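
Every warning in this stretch carries the same root cause, "connect: connection refused": the TCP handshake fails before any HTTP exchange happens, which is also why each Response line logs an empty status and zero milliseconds. In Go this specific condition can be told apart from other transport failures with errors.Is, for example (a standalone snippet, not minikube code):

    package main

    import (
        "errors"
        "fmt"
        "net/http"
        "syscall"
    )

    func main() {
        // Port 8441 on localhost is normally closed, so http.Get fails with
        // ECONNREFUSED before any HTTP request is sent -- the same failure
        // class as in the log above. errors.Is unwraps through *url.Error
        // and *net.OpError down to the syscall errno.
        _, err := http.Get("http://127.0.0.1:8441/")
        if errors.Is(err, syscall.ECONNREFUSED) {
            fmt.Println("apiserver not listening yet; safe to retry:", err)
        } else if err != nil {
            fmt.Println("different transport failure:", err)
        }
    }
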
	I1209 04:32:13.432673 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:13.432745 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:13.433062 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:13.932942 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:13.933022 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:13.933305 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:14.432666 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:14.432737 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:14.433054 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:14.932628 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:14.932702 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:14.933027 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:15.432739 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:15.432819 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:15.433087 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:32:15.433132 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:32:15.932808 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:15.932886 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:15.933232 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... the same GET/empty-response cycle repeats every ~500ms from 04:32:15 through 04:32:54, each attempt failing with "dial tcp 192.168.49.2:8441: connect: connection refused"; the node_ready.go:55 "will retry" warning recurs roughly every 2s ...]
	I1209 04:32:54.432659 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:54.432727 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:54.433303 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:54.933397 1187425 type.go:168] "Request Body" body=""
	W1209 04:32:54.933475 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): client rate limiter Wait returned an error: context deadline exceeded
	I1209 04:32:54.933495 1187425 node_ready.go:38] duration metric: took 6m0.001016343s for node "functional-667319" to be "Ready" ...
	I1209 04:32:54.936503 1187425 out.go:203] 
	W1209 04:32:54.939246 1187425 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1209 04:32:54.939264 1187425 out.go:285] * 
	W1209 04:32:54.941401 1187425 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 04:32:54.944197 1187425 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 09 04:33:02 functional-667319 containerd[5187]: time="2025-12-09T04:33:02.294041580Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 09 04:33:03 functional-667319 containerd[5187]: time="2025-12-09T04:33:03.355738360Z" level=info msg="No images store for sha256:a1f83055284ec302ac691d8677946d8b4e772fb7071d39ada1cc9184cb70814b"
	Dec 09 04:33:03 functional-667319 containerd[5187]: time="2025-12-09T04:33:03.357839975Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:latest\""
	Dec 09 04:33:03 functional-667319 containerd[5187]: time="2025-12-09T04:33:03.366240053Z" level=info msg="ImageCreate event name:\"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 09 04:33:03 functional-667319 containerd[5187]: time="2025-12-09T04:33:03.366711399Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 09 04:33:04 functional-667319 containerd[5187]: time="2025-12-09T04:33:04.314815745Z" level=info msg="No images store for sha256:a39ec332fe9389ac4cf25eee02b25033c0ceb4d88e27730c4ef90701385b405e"
	Dec 09 04:33:04 functional-667319 containerd[5187]: time="2025-12-09T04:33:04.317047408Z" level=info msg="ImageCreate event name:\"docker.io/library/minikube-local-cache-test:functional-667319\""
	Dec 09 04:33:04 functional-667319 containerd[5187]: time="2025-12-09T04:33:04.324208006Z" level=info msg="ImageCreate event name:\"sha256:f396cc1d2a2f792c8359c58d4cd23fe6d949d3fd4d68a61961f5310e98abe14b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 09 04:33:04 functional-667319 containerd[5187]: time="2025-12-09T04:33:04.324769950Z" level=info msg="ImageUpdate event name:\"docker.io/library/minikube-local-cache-test:functional-667319\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 09 04:33:05 functional-667319 containerd[5187]: time="2025-12-09T04:33:05.115841091Z" level=info msg="RemoveImage \"registry.k8s.io/pause:latest\""
	Dec 09 04:33:05 functional-667319 containerd[5187]: time="2025-12-09T04:33:05.118305576Z" level=info msg="ImageDelete event name:\"registry.k8s.io/pause:latest\""
	Dec 09 04:33:05 functional-667319 containerd[5187]: time="2025-12-09T04:33:05.121327976Z" level=info msg="ImageDelete event name:\"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a\""
	Dec 09 04:33:05 functional-667319 containerd[5187]: time="2025-12-09T04:33:05.131814810Z" level=info msg="RemoveImage \"registry.k8s.io/pause:latest\" returns successfully"
	Dec 09 04:33:06 functional-667319 containerd[5187]: time="2025-12-09T04:33:06.216059891Z" level=info msg="No images store for sha256:a1f83055284ec302ac691d8677946d8b4e772fb7071d39ada1cc9184cb70814b"
	Dec 09 04:33:06 functional-667319 containerd[5187]: time="2025-12-09T04:33:06.218615515Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:latest\""
	Dec 09 04:33:06 functional-667319 containerd[5187]: time="2025-12-09T04:33:06.225791605Z" level=info msg="ImageCreate event name:\"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 09 04:33:06 functional-667319 containerd[5187]: time="2025-12-09T04:33:06.226281937Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 09 04:33:06 functional-667319 containerd[5187]: time="2025-12-09T04:33:06.247323820Z" level=info msg="RemoveImage \"registry.k8s.io/pause:3.1\""
	Dec 09 04:33:06 functional-667319 containerd[5187]: time="2025-12-09T04:33:06.249630672Z" level=info msg="ImageDelete event name:\"registry.k8s.io/pause:3.1\""
	Dec 09 04:33:06 functional-667319 containerd[5187]: time="2025-12-09T04:33:06.251480693Z" level=info msg="ImageDelete event name:\"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5\""
	Dec 09 04:33:06 functional-667319 containerd[5187]: time="2025-12-09T04:33:06.259469246Z" level=info msg="RemoveImage \"registry.k8s.io/pause:3.1\" returns successfully"
	Dec 09 04:33:06 functional-667319 containerd[5187]: time="2025-12-09T04:33:06.401349115Z" level=info msg="No images store for sha256:3ac89611d5efd8eb74174b1f04c33b7e73b651cec35b5498caf0cfdd2efd7d48"
	Dec 09 04:33:06 functional-667319 containerd[5187]: time="2025-12-09T04:33:06.403483780Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.1\""
	Dec 09 04:33:06 functional-667319 containerd[5187]: time="2025-12-09T04:33:06.413748704Z" level=info msg="ImageCreate event name:\"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 09 04:33:06 functional-667319 containerd[5187]: time="2025-12-09T04:33:06.414146190Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:33:08.142200    9193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:33:08.142995    9193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:33:08.144682    9193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:33:08.145302    9193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:33:08.147006    9193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 9 03:13] overlayfs: idmapped layers are currently not supported
	[ +25.904254] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:14] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:16] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:18] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:19] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:30] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:32] overlayfs: idmapped layers are currently not supported
	[ +28.114653] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:33] overlayfs: idmapped layers are currently not supported
	[ +23.720849] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:34] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:35] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:36] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:37] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:38] overlayfs: idmapped layers are currently not supported
	[ +23.656275] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:39] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:57] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:58] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:00] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:02] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:03] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:05] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:06] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 04:33:08 up  7:15,  0 user,  load average: 0.42, 0.29, 0.80
	Linux functional-667319 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 09 04:33:05 functional-667319 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 09 04:33:05 functional-667319 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 823.
	Dec 09 04:33:05 functional-667319 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:33:05 functional-667319 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:33:05 functional-667319 kubelet[8981]: E1209 04:33:05.728076    8981 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 09 04:33:05 functional-667319 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 09 04:33:05 functional-667319 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 09 04:33:06 functional-667319 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 824.
	Dec 09 04:33:06 functional-667319 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:33:06 functional-667319 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:33:06 functional-667319 kubelet[9067]: E1209 04:33:06.436747    9067 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 09 04:33:06 functional-667319 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 09 04:33:06 functional-667319 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 09 04:33:07 functional-667319 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 825.
	Dec 09 04:33:07 functional-667319 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:33:07 functional-667319 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:33:07 functional-667319 kubelet[9100]: E1209 04:33:07.244978    9100 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 09 04:33:07 functional-667319 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 09 04:33:07 functional-667319 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 09 04:33:07 functional-667319 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 826.
	Dec 09 04:33:07 functional-667319 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:33:07 functional-667319 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:33:07 functional-667319 kubelet[9155]: E1209 04:33:07.990606    9155 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 09 04:33:07 functional-667319 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 09 04:33:07 functional-667319 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
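The kubelet crash loop in the dump above is the root cause of everything else in this failure: kubelet v1.35.0-beta.0 fails configuration validation on a cgroup v1 host, so the apiserver never comes up and port 8441 stays refused. For context, a minimal sketch (not minikube's code; assumes the golang.org/x/sys/unix package) of how a process can tell which cgroup version a Linux host is running, the distinction kubelet is enforcing here:

// cgroupcheck.go — illustrative only. On a cgroup v2 (unified) host,
// /sys/fs/cgroup is a cgroup2 filesystem; on a v1 host it is a tmpfs
// holding per-controller hierarchies — the configuration kubelet
// v1.35.0-beta.0 rejects in the log above.
package main

import (
	"fmt"

	"golang.org/x/sys/unix"
)

func main() {
	var st unix.Statfs_t
	if err := unix.Statfs("/sys/fs/cgroup", &st); err != nil {
		fmt.Println("statfs /sys/fs/cgroup:", err)
		return
	}
	if st.Type == unix.CGROUP2_SUPER_MAGIC {
		fmt.Println("cgroup v2 (unified hierarchy)")
	} else {
		fmt.Println("cgroup v1 — kubelet v1.35.0-beta.0 refuses to start here")
	}
}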
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-667319 -n functional-667319
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-667319 -n functional-667319: exit status 2 (342.505597ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-667319" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (2.27s)
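For context on the 6-minute hang that precedes these per-test failures: the start log shows minikube polling the node object every ~500ms until its wait deadline expires. A minimal, dependency-free sketch of that wait pattern (plain net/http standing in for the client-go machinery minikube actually uses; the URL and 6m0s deadline are taken from the log):

// readinesspoll.go — illustrative sketch of a poll-until-deadline loop.
package main

import (
	"context"
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// 6m0s matches the "wait 6m0s for node" deadline in the log.
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()

	client := &http.Client{
		// The apiserver cert is not trusted here; a real caller would use
		// the cluster CA from the kubeconfig instead of skipping verification.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	url := "https://192.168.49.2:8441/api/v1/nodes/functional-667319"

	ticker := time.NewTicker(500 * time.Millisecond) // matches the ~500ms cadence in the log
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			// The terminal state this test hit: deadline expired while every
			// attempt failed with "connection refused".
			fmt.Println("node never became Ready:", ctx.Err())
			return
		case <-ticker.C:
			resp, err := client.Get(url)
			if err != nil {
				fmt.Println("will retry:", err) // e.g. dial tcp ...: connect: connection refused
				continue
			}
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver reachable; a real check would now parse the node's Ready condition")
				return
			}
		}
	}
}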

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (2.21s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-667319 get pods
functional_test.go:756: (dbg) Non-zero exit: out/kubectl --context functional-667319 get pods: exit status 1 (115.245448ms)

                                                
                                                
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:759: failed to run kubectl directly. args "out/kubectl --context functional-667319 get pods": exit status 1
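The direct-kubectl failure mirrors the in-VM `describe nodes` attempt earlier: every path to port 8441 is refused. A sketch of the shell-out pattern these helpers use, running kubectl, capturing combined output, and surfacing the exit status (the binary path and context name are taken from the log; this assumes they exist where shown):

// runkubectl.go — illustrative sketch; with the apiserver down, kubectl
// exits 1, matching the "exit status 1" recorded above.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/kubectl", "--context", "functional-667319", "get", "pods")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out)) // e.g. "The connection to the server 192.168.49.2:8441 was refused ..."
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		fmt.Println("non-zero exit:", ee.ExitCode())
	}
}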
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-667319
helpers_test.go:243: (dbg) docker inspect functional-667319:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e5b6511799c8d5c445a335a3bd5cc9a61b518fc27ac93dad8800da366ef32129",
	        "Created": "2025-12-09T04:18:34.060957311Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1182075,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-09T04:18:34.126944158Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e4eb91ed18a24161fce60c7cdd660144ecd5b8c5029dc2dea2c5e423c2f48ce4",
	        "ResolvConfPath": "/var/lib/docker/containers/e5b6511799c8d5c445a335a3bd5cc9a61b518fc27ac93dad8800da366ef32129/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e5b6511799c8d5c445a335a3bd5cc9a61b518fc27ac93dad8800da366ef32129/hostname",
	        "HostsPath": "/var/lib/docker/containers/e5b6511799c8d5c445a335a3bd5cc9a61b518fc27ac93dad8800da366ef32129/hosts",
	        "LogPath": "/var/lib/docker/containers/e5b6511799c8d5c445a335a3bd5cc9a61b518fc27ac93dad8800da366ef32129/e5b6511799c8d5c445a335a3bd5cc9a61b518fc27ac93dad8800da366ef32129-json.log",
	        "Name": "/functional-667319",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-667319:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-667319",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e5b6511799c8d5c445a335a3bd5cc9a61b518fc27ac93dad8800da366ef32129",
	                "LowerDir": "/var/lib/docker/overlay2/b0239006282b6e4609a1f554d0a3fb94c749a13505795c8e4078cb2db194e8e0-init/diff:/var/lib/docker/overlay2/c44bb57aa59cc265266f37f2bb6e7ec0e7d641c3b4aeaa57e6d23deec6f0d1d4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b0239006282b6e4609a1f554d0a3fb94c749a13505795c8e4078cb2db194e8e0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b0239006282b6e4609a1f554d0a3fb94c749a13505795c8e4078cb2db194e8e0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b0239006282b6e4609a1f554d0a3fb94c749a13505795c8e4078cb2db194e8e0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-667319",
	                "Source": "/var/lib/docker/volumes/functional-667319/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-667319",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-667319",
	                "name.minikube.sigs.k8s.io": "functional-667319",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7c81dabcd9e57af9bce0bc0f5619f6ef3a27af43f4b649283a5bd778ab256415",
	            "SandboxKey": "/var/run/docker/netns/7c81dabcd9e5",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33900"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33901"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33904"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33902"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33903"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-667319": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "fe:40:bd:46:56:d8",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "88b3a65de70c15005c532a44219284d4df94e474ca5b78b04514c2f932b03beb",
	                    "EndpointID": "bdef7b156f4a28c1f641ae70b42db2750bb810ae6fe93fd65325e62eb232fe91",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-667319",
	                        "e5b6511799c8"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
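
The inspect dump above records the container's published port map: each exposed port (22, 2376, 5000, 8441, 32443) is bound by Docker to an ephemeral host port on 127.0.0.1 (33900-33904 here). Below is a minimal Go sketch of reading one mapping back with the same Go-template filter that appears later in this log; the docker CLI on PATH and the profile name are the only assumptions:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// Ask the docker CLI for the host port published for 22/tcp, using the
	// Go-template filter seen in the provisioning log below. Sketch only.
	func main() {
		tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, "functional-667319").Output()
		if err != nil {
			panic(err)
		}
		fmt.Println("ssh host port:", strings.TrimSpace(string(out))) // 33900 in the dump above
	}

This is the mapping the SSH provisioning steps further down resolve before dialing 127.0.0.1:33900.
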
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-667319 -n functional-667319
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-667319 -n functional-667319: exit status 2 (320.503333ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-667319 logs -n 25
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                          ARGS                                                                           │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-717497 image ls --format short --alsologtostderr                                                                                             │ functional-717497 │ jenkins │ v1.37.0 │ 09 Dec 25 04:18 UTC │ 09 Dec 25 04:18 UTC │
	│ image   │ functional-717497 image ls --format yaml --alsologtostderr                                                                                              │ functional-717497 │ jenkins │ v1.37.0 │ 09 Dec 25 04:18 UTC │ 09 Dec 25 04:18 UTC │
	│ ssh     │ functional-717497 ssh pgrep buildkitd                                                                                                                   │ functional-717497 │ jenkins │ v1.37.0 │ 09 Dec 25 04:18 UTC │                     │
	│ image   │ functional-717497 image ls --format json --alsologtostderr                                                                                              │ functional-717497 │ jenkins │ v1.37.0 │ 09 Dec 25 04:18 UTC │ 09 Dec 25 04:18 UTC │
	│ image   │ functional-717497 image build -t localhost/my-image:functional-717497 testdata/build --alsologtostderr                                                  │ functional-717497 │ jenkins │ v1.37.0 │ 09 Dec 25 04:18 UTC │ 09 Dec 25 04:18 UTC │
	│ image   │ functional-717497 image ls --format table --alsologtostderr                                                                                             │ functional-717497 │ jenkins │ v1.37.0 │ 09 Dec 25 04:18 UTC │ 09 Dec 25 04:18 UTC │
	│ image   │ functional-717497 image ls                                                                                                                              │ functional-717497 │ jenkins │ v1.37.0 │ 09 Dec 25 04:18 UTC │ 09 Dec 25 04:18 UTC │
	│ delete  │ -p functional-717497                                                                                                                                    │ functional-717497 │ jenkins │ v1.37.0 │ 09 Dec 25 04:18 UTC │ 09 Dec 25 04:18 UTC │
	│ start   │ -p functional-667319 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:18 UTC │                     │
	│ start   │ -p functional-667319 --alsologtostderr -v=8                                                                                                             │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:26 UTC │                     │
	│ cache   │ functional-667319 cache add registry.k8s.io/pause:3.1                                                                                                   │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:32 UTC │ 09 Dec 25 04:33 UTC │
	│ cache   │ functional-667319 cache add registry.k8s.io/pause:3.3                                                                                                   │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:33 UTC │ 09 Dec 25 04:33 UTC │
	│ cache   │ functional-667319 cache add registry.k8s.io/pause:latest                                                                                                │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:33 UTC │ 09 Dec 25 04:33 UTC │
	│ cache   │ functional-667319 cache add minikube-local-cache-test:functional-667319                                                                                 │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:33 UTC │ 09 Dec 25 04:33 UTC │
	│ cache   │ functional-667319 cache delete minikube-local-cache-test:functional-667319                                                                              │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:33 UTC │ 09 Dec 25 04:33 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 09 Dec 25 04:33 UTC │ 09 Dec 25 04:33 UTC │
	│ cache   │ list                                                                                                                                                    │ minikube          │ jenkins │ v1.37.0 │ 09 Dec 25 04:33 UTC │ 09 Dec 25 04:33 UTC │
	│ ssh     │ functional-667319 ssh sudo crictl images                                                                                                                │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:33 UTC │ 09 Dec 25 04:33 UTC │
	│ ssh     │ functional-667319 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                      │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:33 UTC │ 09 Dec 25 04:33 UTC │
	│ ssh     │ functional-667319 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                 │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:33 UTC │                     │
	│ cache   │ functional-667319 cache reload                                                                                                                          │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:33 UTC │ 09 Dec 25 04:33 UTC │
	│ ssh     │ functional-667319 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                 │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:33 UTC │ 09 Dec 25 04:33 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 09 Dec 25 04:33 UTC │ 09 Dec 25 04:33 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                                     │ minikube          │ jenkins │ v1.37.0 │ 09 Dec 25 04:33 UTC │ 09 Dec 25 04:33 UTC │
	│ kubectl │ functional-667319 kubectl -- --context functional-667319 get pods                                                                                       │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:33 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/09 04:26:49
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 04:26:49.901158 1187425 out.go:360] Setting OutFile to fd 1 ...
	I1209 04:26:49.901350 1187425 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 04:26:49.901380 1187425 out.go:374] Setting ErrFile to fd 2...
	I1209 04:26:49.901407 1187425 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 04:26:49.902126 1187425 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-1142328/.minikube/bin
	I1209 04:26:49.902570 1187425 out.go:368] Setting JSON to false
	I1209 04:26:49.903455 1187425 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":25733,"bootTime":1765228677,"procs":156,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1209 04:26:49.903532 1187425 start.go:143] virtualization:  
	I1209 04:26:49.907035 1187425 out.go:179] * [functional-667319] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1209 04:26:49.910766 1187425 out.go:179]   - MINIKUBE_LOCATION=22081
	I1209 04:26:49.910878 1187425 notify.go:221] Checking for updates...
	I1209 04:26:49.916570 1187425 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 04:26:49.919423 1187425 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22081-1142328/kubeconfig
	I1209 04:26:49.922184 1187425 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-1142328/.minikube
	I1209 04:26:49.924947 1187425 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1209 04:26:49.927723 1187425 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 04:26:49.930999 1187425 config.go:182] Loaded profile config "functional-667319": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1209 04:26:49.931139 1187425 driver.go:422] Setting default libvirt URI to qemu:///system
	I1209 04:26:49.958230 1187425 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1209 04:26:49.958344 1187425 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 04:26:50.018007 1187425 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-09 04:26:50.006695366 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1209 04:26:50.018130 1187425 docker.go:319] overlay module found
	I1209 04:26:50.021068 1187425 out.go:179] * Using the docker driver based on existing profile
	I1209 04:26:50.024068 1187425 start.go:309] selected driver: docker
	I1209 04:26:50.024096 1187425 start.go:927] validating driver "docker" against &{Name:functional-667319 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-667319 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 04:26:50.024203 1187425 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 04:26:50.024322 1187425 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 04:26:50.086853 1187425 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-09 04:26:50.07716198 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1209 04:26:50.087299 1187425 cni.go:84] Creating CNI manager for ""
	I1209 04:26:50.087371 1187425 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1209 04:26:50.087429 1187425 start.go:353] cluster config:
	{Name:functional-667319 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-667319 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 04:26:50.090570 1187425 out.go:179] * Starting "functional-667319" primary control-plane node in "functional-667319" cluster
	I1209 04:26:50.093453 1187425 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1209 04:26:50.098431 1187425 out.go:179] * Pulling base image v0.0.48-1765184860-22066 ...
	I1209 04:26:50.101405 1187425 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1209 04:26:50.101471 1187425 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local docker daemon
	I1209 04:26:50.101485 1187425 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1209 04:26:50.101503 1187425 cache.go:65] Caching tarball of preloaded images
	I1209 04:26:50.101600 1187425 preload.go:238] Found /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1209 04:26:50.101616 1187425 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1209 04:26:50.101720 1187425 profile.go:143] Saving config to /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/config.json ...
	I1209 04:26:50.125607 1187425 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local docker daemon, skipping pull
	I1209 04:26:50.125633 1187425 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c exists in daemon, skipping load
	I1209 04:26:50.125648 1187425 cache.go:243] Successfully downloaded all kic artifacts
	I1209 04:26:50.125680 1187425 start.go:360] acquireMachinesLock for functional-667319: {Name:mk6c31f0747796f5f8ac8ea1653d6ee60fe2a47d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 04:26:50.125839 1187425 start.go:364] duration metric: took 130.318µs to acquireMachinesLock for "functional-667319"
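	
	acquireMachinesLock above is a named cross-process lock taken with a 500ms retry delay and a 10-minute timeout, so concurrent minikube invocations serialize access to the same machine. A generic sketch of that acquire-with-retry pattern (illustrative only, not minikube's implementation):
	
	package main
	
	import (
		"fmt"
		"sync"
		"time"
	)
	
	// acquire polls try every delay until it succeeds or timeout elapses,
	// mirroring the Delay:500ms Timeout:10m0s spec in the log line above.
	func acquire(try func() bool, delay, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for !try() {
			if time.Now().After(deadline) {
				return fmt.Errorf("lock not acquired within %v", timeout)
			}
			time.Sleep(delay)
		}
		return nil
	}
	
	func main() {
		var mu sync.Mutex
		if err := acquire(mu.TryLock, 500*time.Millisecond, 10*time.Minute); err != nil {
			panic(err)
		}
		defer mu.Unlock()
		fmt.Println("machines lock held")
	}
	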
	I1209 04:26:50.125869 1187425 start.go:96] Skipping create...Using existing machine configuration
	I1209 04:26:50.125878 1187425 fix.go:54] fixHost starting: 
	I1209 04:26:50.126147 1187425 cli_runner.go:164] Run: docker container inspect functional-667319 --format={{.State.Status}}
	I1209 04:26:50.147043 1187425 fix.go:112] recreateIfNeeded on functional-667319: state=Running err=<nil>
	W1209 04:26:50.147073 1187425 fix.go:138] unexpected machine state, will restart: <nil>
	I1209 04:26:50.150254 1187425 out.go:252] * Updating the running docker "functional-667319" container ...
	I1209 04:26:50.150291 1187425 machine.go:94] provisionDockerMachine start ...
	I1209 04:26:50.150379 1187425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-667319
	I1209 04:26:50.167513 1187425 main.go:143] libmachine: Using SSH client type: native
	I1209 04:26:50.167851 1187425 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 33900 <nil> <nil>}
	I1209 04:26:50.167868 1187425 main.go:143] libmachine: About to run SSH command:
	hostname
	I1209 04:26:50.327552 1187425 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-667319
	
	I1209 04:26:50.327578 1187425 ubuntu.go:182] provisioning hostname "functional-667319"
	I1209 04:26:50.327642 1187425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-667319
	I1209 04:26:50.345440 1187425 main.go:143] libmachine: Using SSH client type: native
	I1209 04:26:50.345757 1187425 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 33900 <nil> <nil>}
	I1209 04:26:50.345775 1187425 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-667319 && echo "functional-667319" | sudo tee /etc/hostname
	I1209 04:26:50.504917 1187425 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-667319
	
	I1209 04:26:50.505070 1187425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-667319
	I1209 04:26:50.522734 1187425 main.go:143] libmachine: Using SSH client type: native
	I1209 04:26:50.523054 1187425 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 33900 <nil> <nil>}
	I1209 04:26:50.523070 1187425 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-667319' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-667319/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-667319' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 04:26:50.676107 1187425 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1209 04:26:50.676133 1187425 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22081-1142328/.minikube CaCertPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22081-1142328/.minikube}
	I1209 04:26:50.676165 1187425 ubuntu.go:190] setting up certificates
	I1209 04:26:50.676182 1187425 provision.go:84] configureAuth start
	I1209 04:26:50.676245 1187425 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-667319
	I1209 04:26:50.692809 1187425 provision.go:143] copyHostCerts
	I1209 04:26:50.692850 1187425 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.pem
	I1209 04:26:50.692881 1187425 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.pem, removing ...
	I1209 04:26:50.692892 1187425 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.pem
	I1209 04:26:50.692964 1187425 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.pem (1078 bytes)
	I1209 04:26:50.693060 1187425 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22081-1142328/.minikube/cert.pem
	I1209 04:26:50.693088 1187425 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-1142328/.minikube/cert.pem, removing ...
	I1209 04:26:50.693096 1187425 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-1142328/.minikube/cert.pem
	I1209 04:26:50.693122 1187425 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22081-1142328/.minikube/cert.pem (1123 bytes)
	I1209 04:26:50.693175 1187425 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22081-1142328/.minikube/key.pem
	I1209 04:26:50.693199 1187425 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-1142328/.minikube/key.pem, removing ...
	I1209 04:26:50.693206 1187425 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-1142328/.minikube/key.pem
	I1209 04:26:50.693233 1187425 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22081-1142328/.minikube/key.pem (1675 bytes)
	I1209 04:26:50.693287 1187425 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca-key.pem org=jenkins.functional-667319 san=[127.0.0.1 192.168.49.2 functional-667319 localhost minikube]
	I1209 04:26:50.808459 1187425 provision.go:177] copyRemoteCerts
	I1209 04:26:50.808521 1187425 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 04:26:50.808568 1187425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-667319
	I1209 04:26:50.825015 1187425 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/functional-667319/id_rsa Username:docker}
	I1209 04:26:50.931904 1187425 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1209 04:26:50.931970 1187425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1209 04:26:50.950373 1187425 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1209 04:26:50.950430 1187425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1209 04:26:50.967052 1187425 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1209 04:26:50.967110 1187425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1209 04:26:50.984302 1187425 provision.go:87] duration metric: took 308.098174ms to configureAuth
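	
	configureAuth above generates a server certificate with SANs [127.0.0.1 192.168.49.2 functional-667319 localhost minikube], then copies ca.pem, server.pem, and server-key.pem into /etc/docker on the node. A hedged sketch of checking such a pair with Go's crypto/x509 (the file paths are assumptions, not minikube code):
	
	package main
	
	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)
	
	// load parses one PEM certificate from disk; it panics on bad input (sketch only).
	func load(path string) *x509.Certificate {
		raw, err := os.ReadFile(path)
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(raw)
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		return cert
	}
	
	func main() {
		roots := x509.NewCertPool()
		roots.AddCert(load("ca.pem"))
		// DNSName also matches IP SANs, so 192.168.49.2 from the san list above works here.
		_, err := load("server.pem").Verify(x509.VerifyOptions{Roots: roots, DNSName: "192.168.49.2"})
		fmt.Println("verify:", err) // nil when the cert chains and the SAN matches
	}
	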
	I1209 04:26:50.984386 1187425 ubuntu.go:206] setting minikube options for container-runtime
	I1209 04:26:50.984596 1187425 config.go:182] Loaded profile config "functional-667319": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1209 04:26:50.984634 1187425 machine.go:97] duration metric: took 834.335015ms to provisionDockerMachine
	I1209 04:26:50.984656 1187425 start.go:293] postStartSetup for "functional-667319" (driver="docker")
	I1209 04:26:50.984680 1187425 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 04:26:50.984759 1187425 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 04:26:50.984834 1187425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-667319
	I1209 04:26:51.005808 1187425 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/functional-667319/id_rsa Username:docker}
	I1209 04:26:51.112821 1187425 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 04:26:51.116496 1187425 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1209 04:26:51.116518 1187425 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1209 04:26:51.116523 1187425 command_runner.go:130] > VERSION_ID="12"
	I1209 04:26:51.116528 1187425 command_runner.go:130] > VERSION="12 (bookworm)"
	I1209 04:26:51.116532 1187425 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1209 04:26:51.116536 1187425 command_runner.go:130] > ID=debian
	I1209 04:26:51.116540 1187425 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1209 04:26:51.116545 1187425 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1209 04:26:51.116554 1187425 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1209 04:26:51.116627 1187425 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1209 04:26:51.116648 1187425 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1209 04:26:51.116659 1187425 filesync.go:126] Scanning /home/jenkins/minikube-integration/22081-1142328/.minikube/addons for local assets ...
	I1209 04:26:51.116715 1187425 filesync.go:126] Scanning /home/jenkins/minikube-integration/22081-1142328/.minikube/files for local assets ...
	I1209 04:26:51.116799 1187425 filesync.go:149] local asset: /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/ssl/certs/11442312.pem -> 11442312.pem in /etc/ssl/certs
	I1209 04:26:51.116806 1187425 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/ssl/certs/11442312.pem -> /etc/ssl/certs/11442312.pem
	I1209 04:26:51.116882 1187425 filesync.go:149] local asset: /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/test/nested/copy/1144231/hosts -> hosts in /etc/test/nested/copy/1144231
	I1209 04:26:51.116886 1187425 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/test/nested/copy/1144231/hosts -> /etc/test/nested/copy/1144231/hosts
	I1209 04:26:51.116933 1187425 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1144231
	I1209 04:26:51.124908 1187425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/ssl/certs/11442312.pem --> /etc/ssl/certs/11442312.pem (1708 bytes)
	I1209 04:26:51.143368 1187425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/test/nested/copy/1144231/hosts --> /etc/test/nested/copy/1144231/hosts (40 bytes)
	I1209 04:26:51.161824 1187425 start.go:296] duration metric: took 177.139225ms for postStartSetup
	I1209 04:26:51.161916 1187425 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1209 04:26:51.161982 1187425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-667319
	I1209 04:26:51.181271 1187425 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/functional-667319/id_rsa Username:docker}
	I1209 04:26:51.284406 1187425 command_runner.go:130] > 12%
	I1209 04:26:51.284922 1187425 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1209 04:26:51.288619 1187425 command_runner.go:130] > 172G
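	
	The two probes above are minikube's disk-pressure check on /var inside the container (12% used, 172G free here). The same figure can be read without shelling out to df; a Linux-only sketch using syscall.Statfs:
	
	package main
	
	import (
		"fmt"
		"syscall"
	)
	
	// Report free space on /var in GiB, mirroring `df -BG /var | awk 'NR==2{print $4}'`.
	func main() {
		var st syscall.Statfs_t
		if err := syscall.Statfs("/var", &st); err != nil {
			panic(err)
		}
		fmt.Printf("%dG\n", st.Bavail*uint64(st.Bsize)/(1<<30))
	}
	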
	I1209 04:26:51.288953 1187425 fix.go:56] duration metric: took 1.163071262s for fixHost
	I1209 04:26:51.288968 1187425 start.go:83] releasing machines lock for "functional-667319", held for 1.163111146s
	I1209 04:26:51.289042 1187425 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-667319
	I1209 04:26:51.305835 1187425 ssh_runner.go:195] Run: cat /version.json
	I1209 04:26:51.305885 1187425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-667319
	I1209 04:26:51.305897 1187425 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 04:26:51.305950 1187425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-667319
	I1209 04:26:51.325384 1187425 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/functional-667319/id_rsa Username:docker}
	I1209 04:26:51.327293 1187425 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/functional-667319/id_rsa Username:docker}
	I1209 04:26:51.427270 1187425 command_runner.go:130] > {"iso_version": "v1.37.0-1764843329-22032", "kicbase_version": "v0.0.48-1765184860-22066", "minikube_version": "v1.37.0", "commit": "27bcd52be11288bda2f9abde063aa47b22607695"}
	I1209 04:26:51.427541 1187425 ssh_runner.go:195] Run: systemctl --version
	I1209 04:26:51.517549 1187425 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1209 04:26:51.520210 1187425 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1209 04:26:51.520243 1187425 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1209 04:26:51.520320 1187425 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1209 04:26:51.524536 1187425 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1209 04:26:51.524574 1187425 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 04:26:51.524644 1187425 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 04:26:51.532138 1187425 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1209 04:26:51.532170 1187425 start.go:496] detecting cgroup driver to use...
	I1209 04:26:51.532202 1187425 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1209 04:26:51.532264 1187425 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1209 04:26:51.547055 1187425 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1209 04:26:51.559544 1187425 docker.go:218] disabling cri-docker service (if available) ...
	I1209 04:26:51.559644 1187425 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 04:26:51.574821 1187425 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 04:26:51.587447 1187425 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 04:26:51.703845 1187425 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 04:26:51.839863 1187425 docker.go:234] disabling docker service ...
	I1209 04:26:51.839930 1187425 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 04:26:51.856255 1187425 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 04:26:51.869081 1187425 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 04:26:51.995560 1187425 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 04:26:52.125293 1187425 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 04:26:52.137749 1187425 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 04:26:52.150135 1187425 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1209 04:26:52.151507 1187425 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1209 04:26:52.160197 1187425 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1209 04:26:52.168921 1187425 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1209 04:26:52.169008 1187425 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1209 04:26:52.177592 1187425 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1209 04:26:52.185997 1187425 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1209 04:26:52.194259 1187425 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1209 04:26:52.202620 1187425 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 04:26:52.210466 1187425 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1209 04:26:52.219232 1187425 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1209 04:26:52.227579 1187425 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1209 04:26:52.236059 1187425 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 04:26:52.242619 1187425 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1209 04:26:52.243485 1187425 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 04:26:52.250890 1187425 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 04:26:52.361246 1187425 ssh_runner.go:195] Run: sudo systemctl restart containerd
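	
	The run of commands above aligns the runtime with the cgroupfs driver detected earlier: point crictl at the containerd socket via /etc/crictl.yaml, pin the pause image, force SystemdCgroup = false, normalize the runc runtime names, re-enable IP forwarding, and restart containerd. As a rough in-memory illustration of the SystemdCgroup rewrite (not minikube's actual implementation, which runs the sed commands shown):
	
	package main
	
	import (
		"fmt"
		"regexp"
	)
	
	// Mirror `sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'`
	// from the log on a string copy of containerd's config.toml.
	func main() {
		conf := "    SystemdCgroup = true\n"
		re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
		fmt.Print(re.ReplaceAllString(conf, "${1}SystemdCgroup = false"))
	}
	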
	I1209 04:26:52.490552 1187425 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1209 04:26:52.490653 1187425 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1209 04:26:52.497112 1187425 command_runner.go:130] >   File: /run/containerd/containerd.sock
	I1209 04:26:52.497174 1187425 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1209 04:26:52.497206 1187425 command_runner.go:130] > Device: 0,72	Inode: 1613        Links: 1
	I1209 04:26:52.497227 1187425 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1209 04:26:52.497247 1187425 command_runner.go:130] > Access: 2025-12-09 04:26:52.442263978 +0000
	I1209 04:26:52.497281 1187425 command_runner.go:130] > Modify: 2025-12-09 04:26:52.442263978 +0000
	I1209 04:26:52.497301 1187425 command_runner.go:130] > Change: 2025-12-09 04:26:52.442263978 +0000
	I1209 04:26:52.497319 1187425 command_runner.go:130] >  Birth: -
	I1209 04:26:52.497534 1187425 start.go:564] Will wait 60s for crictl version
	I1209 04:26:52.497619 1187425 ssh_runner.go:195] Run: which crictl
	I1209 04:26:52.501257 1187425 command_runner.go:130] > /usr/local/bin/crictl
	I1209 04:26:52.502001 1187425 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1209 04:26:52.535942 1187425 command_runner.go:130] > Version:  0.1.0
	I1209 04:26:52.535964 1187425 command_runner.go:130] > RuntimeName:  containerd
	I1209 04:26:52.535970 1187425 command_runner.go:130] > RuntimeVersion:  v2.2.0
	I1209 04:26:52.535975 1187425 command_runner.go:130] > RuntimeApiVersion:  v1
	I1209 04:26:52.535985 1187425 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1209 04:26:52.536096 1187425 ssh_runner.go:195] Run: containerd --version
	I1209 04:26:52.556939 1187425 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
	I1209 04:26:52.562389 1187425 ssh_runner.go:195] Run: containerd --version
	I1209 04:26:52.582187 1187425 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
	I1209 04:26:52.587659 1187425 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1209 04:26:52.590705 1187425 cli_runner.go:164] Run: docker network inspect functional-667319 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1209 04:26:52.606900 1187425 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1209 04:26:52.610849 1187425 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1209 04:26:52.610974 1187425 kubeadm.go:884] updating cluster {Name:functional-667319 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-667319 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 04:26:52.611074 1187425 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1209 04:26:52.611135 1187425 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 04:26:52.634142 1187425 command_runner.go:130] > {
	I1209 04:26:52.634161 1187425 command_runner.go:130] >   "images":  [
	I1209 04:26:52.634166 1187425 command_runner.go:130] >     {
	I1209 04:26:52.634175 1187425 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1209 04:26:52.634180 1187425 command_runner.go:130] >       "repoTags":  [
	I1209 04:26:52.634186 1187425 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1209 04:26:52.634190 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.634194 1187425 command_runner.go:130] >       "repoDigests":  [
	I1209 04:26:52.634210 1187425 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1209 04:26:52.634213 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.634218 1187425 command_runner.go:130] >       "size":  "40636774",
	I1209 04:26:52.634222 1187425 command_runner.go:130] >       "username":  "",
	I1209 04:26:52.634230 1187425 command_runner.go:130] >       "pinned":  false
	I1209 04:26:52.634233 1187425 command_runner.go:130] >     },
	I1209 04:26:52.634236 1187425 command_runner.go:130] >     {
	I1209 04:26:52.634246 1187425 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1209 04:26:52.634251 1187425 command_runner.go:130] >       "repoTags":  [
	I1209 04:26:52.634256 1187425 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1209 04:26:52.634259 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.634263 1187425 command_runner.go:130] >       "repoDigests":  [
	I1209 04:26:52.634271 1187425 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1209 04:26:52.634274 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.634278 1187425 command_runner.go:130] >       "size":  "8034419",
	I1209 04:26:52.634282 1187425 command_runner.go:130] >       "username":  "",
	I1209 04:26:52.634286 1187425 command_runner.go:130] >       "pinned":  false
	I1209 04:26:52.634289 1187425 command_runner.go:130] >     },
	I1209 04:26:52.634292 1187425 command_runner.go:130] >     {
	I1209 04:26:52.634298 1187425 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1209 04:26:52.634302 1187425 command_runner.go:130] >       "repoTags":  [
	I1209 04:26:52.634307 1187425 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1209 04:26:52.634310 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.634317 1187425 command_runner.go:130] >       "repoDigests":  [
	I1209 04:26:52.634325 1187425 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1209 04:26:52.634328 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.634333 1187425 command_runner.go:130] >       "size":  "21168808",
	I1209 04:26:52.634337 1187425 command_runner.go:130] >       "username":  "nonroot",
	I1209 04:26:52.634341 1187425 command_runner.go:130] >       "pinned":  false
	I1209 04:26:52.634349 1187425 command_runner.go:130] >     },
	I1209 04:26:52.634355 1187425 command_runner.go:130] >     {
	I1209 04:26:52.634362 1187425 command_runner.go:130] >       "id":  "sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1209 04:26:52.634367 1187425 command_runner.go:130] >       "repoTags":  [
	I1209 04:26:52.634372 1187425 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1209 04:26:52.634375 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.634379 1187425 command_runner.go:130] >       "repoDigests":  [
	I1209 04:26:52.634387 1187425 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"
	I1209 04:26:52.634393 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.634397 1187425 command_runner.go:130] >       "size":  "21136588",
	I1209 04:26:52.634402 1187425 command_runner.go:130] >       "uid":  {
	I1209 04:26:52.634405 1187425 command_runner.go:130] >         "value":  "0"
	I1209 04:26:52.634408 1187425 command_runner.go:130] >       },
	I1209 04:26:52.634412 1187425 command_runner.go:130] >       "username":  "",
	I1209 04:26:52.634415 1187425 command_runner.go:130] >       "pinned":  false
	I1209 04:26:52.634418 1187425 command_runner.go:130] >     },
	I1209 04:26:52.634421 1187425 command_runner.go:130] >     {
	I1209 04:26:52.634428 1187425 command_runner.go:130] >       "id":  "sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1209 04:26:52.634431 1187425 command_runner.go:130] >       "repoTags":  [
	I1209 04:26:52.634437 1187425 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1209 04:26:52.634440 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.634443 1187425 command_runner.go:130] >       "repoDigests":  [
	I1209 04:26:52.634451 1187425 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58"
	I1209 04:26:52.634453 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.634457 1187425 command_runner.go:130] >       "size":  "24678359",
	I1209 04:26:52.634461 1187425 command_runner.go:130] >       "uid":  {
	I1209 04:26:52.634468 1187425 command_runner.go:130] >         "value":  "0"
	I1209 04:26:52.634471 1187425 command_runner.go:130] >       },
	I1209 04:26:52.634474 1187425 command_runner.go:130] >       "username":  "",
	I1209 04:26:52.634478 1187425 command_runner.go:130] >       "pinned":  false
	I1209 04:26:52.634480 1187425 command_runner.go:130] >     },
	I1209 04:26:52.634483 1187425 command_runner.go:130] >     {
	I1209 04:26:52.634490 1187425 command_runner.go:130] >       "id":  "sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1209 04:26:52.634493 1187425 command_runner.go:130] >       "repoTags":  [
	I1209 04:26:52.634499 1187425 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1209 04:26:52.634501 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.634505 1187425 command_runner.go:130] >       "repoDigests":  [
	I1209 04:26:52.634513 1187425 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d"
	I1209 04:26:52.634516 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.634520 1187425 command_runner.go:130] >       "size":  "20661043",
	I1209 04:26:52.634523 1187425 command_runner.go:130] >       "uid":  {
	I1209 04:26:52.634532 1187425 command_runner.go:130] >         "value":  "0"
	I1209 04:26:52.634535 1187425 command_runner.go:130] >       },
	I1209 04:26:52.634539 1187425 command_runner.go:130] >       "username":  "",
	I1209 04:26:52.634543 1187425 command_runner.go:130] >       "pinned":  false
	I1209 04:26:52.634546 1187425 command_runner.go:130] >     },
	I1209 04:26:52.634548 1187425 command_runner.go:130] >     {
	I1209 04:26:52.634555 1187425 command_runner.go:130] >       "id":  "sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1209 04:26:52.634558 1187425 command_runner.go:130] >       "repoTags":  [
	I1209 04:26:52.634563 1187425 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1209 04:26:52.634566 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.634569 1187425 command_runner.go:130] >       "repoDigests":  [
	I1209 04:26:52.634577 1187425 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1209 04:26:52.634580 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.634583 1187425 command_runner.go:130] >       "size":  "22429671",
	I1209 04:26:52.634587 1187425 command_runner.go:130] >       "username":  "",
	I1209 04:26:52.634591 1187425 command_runner.go:130] >       "pinned":  false
	I1209 04:26:52.634594 1187425 command_runner.go:130] >     },
	I1209 04:26:52.634597 1187425 command_runner.go:130] >     {
	I1209 04:26:52.634604 1187425 command_runner.go:130] >       "id":  "sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1209 04:26:52.634607 1187425 command_runner.go:130] >       "repoTags":  [
	I1209 04:26:52.634613 1187425 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1209 04:26:52.634616 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.634620 1187425 command_runner.go:130] >       "repoDigests":  [
	I1209 04:26:52.634627 1187425 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6"
	I1209 04:26:52.634630 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.634634 1187425 command_runner.go:130] >       "size":  "15391364",
	I1209 04:26:52.634638 1187425 command_runner.go:130] >       "uid":  {
	I1209 04:26:52.634641 1187425 command_runner.go:130] >         "value":  "0"
	I1209 04:26:52.634644 1187425 command_runner.go:130] >       },
	I1209 04:26:52.634649 1187425 command_runner.go:130] >       "username":  "",
	I1209 04:26:52.634653 1187425 command_runner.go:130] >       "pinned":  false
	I1209 04:26:52.634655 1187425 command_runner.go:130] >     },
	I1209 04:26:52.634659 1187425 command_runner.go:130] >     {
	I1209 04:26:52.634670 1187425 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1209 04:26:52.634674 1187425 command_runner.go:130] >       "repoTags":  [
	I1209 04:26:52.634678 1187425 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1209 04:26:52.634681 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.634685 1187425 command_runner.go:130] >       "repoDigests":  [
	I1209 04:26:52.634693 1187425 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1209 04:26:52.634695 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.634699 1187425 command_runner.go:130] >       "size":  "267939",
	I1209 04:26:52.634703 1187425 command_runner.go:130] >       "uid":  {
	I1209 04:26:52.634706 1187425 command_runner.go:130] >         "value":  "65535"
	I1209 04:26:52.634709 1187425 command_runner.go:130] >       },
	I1209 04:26:52.634713 1187425 command_runner.go:130] >       "username":  "",
	I1209 04:26:52.634717 1187425 command_runner.go:130] >       "pinned":  true
	I1209 04:26:52.634720 1187425 command_runner.go:130] >     }
	I1209 04:26:52.634723 1187425 command_runner.go:130] >   ]
	I1209 04:26:52.634726 1187425 command_runner.go:130] > }
	I1209 04:26:52.636238 1187425 containerd.go:627] all images are preloaded for containerd runtime.
	I1209 04:26:52.636265 1187425 containerd.go:534] Images already preloaded, skipping extraction
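
The preload decision above is made by shelling out to `sudo crictl images --output json` and comparing the returned repoTags against the image list expected for v1.35.0-beta.0. A minimal sketch of that check, assuming the JSON shape shown above (the `required` list here is an illustrative subset, not minikube's full manifest):

```go
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// image mirrors the fields of interest in `crictl images --output json`.
type image struct {
	RepoTags []string `json:"repoTags"`
}

type imageList struct {
	Images []image `json:"images"`
}

func main() {
	// Hypothetical subset of the images expected for v1.35.0-beta.0.
	required := []string{
		"registry.k8s.io/kube-apiserver:v1.35.0-beta.0",
		"registry.k8s.io/etcd:3.6.5-0",
	}

	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		panic(err)
	}

	have := map[string]bool{}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			have[tag] = true
		}
	}
	for _, want := range required {
		if !have[want] {
			fmt.Println("missing:", want) // would trigger preload extraction
			return
		}
	}
	fmt.Println("all images are preloaded") // matches the containerd.go:627 message above
}
```
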
	I1209 04:26:52.636328 1187425 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 04:26:52.662300 1187425 command_runner.go:130] > {
	I1209 04:26:52.662318 1187425 command_runner.go:130] >   "images":  [
	I1209 04:26:52.662323 1187425 command_runner.go:130] >     {
	I1209 04:26:52.662332 1187425 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1209 04:26:52.662349 1187425 command_runner.go:130] >       "repoTags":  [
	I1209 04:26:52.662355 1187425 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1209 04:26:52.662358 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.662363 1187425 command_runner.go:130] >       "repoDigests":  [
	I1209 04:26:52.662375 1187425 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1209 04:26:52.662379 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.662383 1187425 command_runner.go:130] >       "size":  "40636774",
	I1209 04:26:52.662388 1187425 command_runner.go:130] >       "username":  "",
	I1209 04:26:52.662392 1187425 command_runner.go:130] >       "pinned":  false
	I1209 04:26:52.662395 1187425 command_runner.go:130] >     },
	I1209 04:26:52.662398 1187425 command_runner.go:130] >     {
	I1209 04:26:52.662406 1187425 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1209 04:26:52.662410 1187425 command_runner.go:130] >       "repoTags":  [
	I1209 04:26:52.662416 1187425 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1209 04:26:52.662420 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.662424 1187425 command_runner.go:130] >       "repoDigests":  [
	I1209 04:26:52.662436 1187425 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1209 04:26:52.662440 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.662444 1187425 command_runner.go:130] >       "size":  "8034419",
	I1209 04:26:52.662448 1187425 command_runner.go:130] >       "username":  "",
	I1209 04:26:52.662452 1187425 command_runner.go:130] >       "pinned":  false
	I1209 04:26:52.662460 1187425 command_runner.go:130] >     },
	I1209 04:26:52.662463 1187425 command_runner.go:130] >     {
	I1209 04:26:52.662470 1187425 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1209 04:26:52.662474 1187425 command_runner.go:130] >       "repoTags":  [
	I1209 04:26:52.662479 1187425 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1209 04:26:52.662482 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.662488 1187425 command_runner.go:130] >       "repoDigests":  [
	I1209 04:26:52.662496 1187425 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1209 04:26:52.662500 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.662504 1187425 command_runner.go:130] >       "size":  "21168808",
	I1209 04:26:52.662508 1187425 command_runner.go:130] >       "username":  "nonroot",
	I1209 04:26:52.662512 1187425 command_runner.go:130] >       "pinned":  false
	I1209 04:26:52.662515 1187425 command_runner.go:130] >     },
	I1209 04:26:52.662519 1187425 command_runner.go:130] >     {
	I1209 04:26:52.662525 1187425 command_runner.go:130] >       "id":  "sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1209 04:26:52.662529 1187425 command_runner.go:130] >       "repoTags":  [
	I1209 04:26:52.662534 1187425 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1209 04:26:52.662538 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.662541 1187425 command_runner.go:130] >       "repoDigests":  [
	I1209 04:26:52.662549 1187425 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"
	I1209 04:26:52.662552 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.662556 1187425 command_runner.go:130] >       "size":  "21136588",
	I1209 04:26:52.662561 1187425 command_runner.go:130] >       "uid":  {
	I1209 04:26:52.662565 1187425 command_runner.go:130] >         "value":  "0"
	I1209 04:26:52.662568 1187425 command_runner.go:130] >       },
	I1209 04:26:52.662572 1187425 command_runner.go:130] >       "username":  "",
	I1209 04:26:52.662576 1187425 command_runner.go:130] >       "pinned":  false
	I1209 04:26:52.662579 1187425 command_runner.go:130] >     },
	I1209 04:26:52.662585 1187425 command_runner.go:130] >     {
	I1209 04:26:52.662592 1187425 command_runner.go:130] >       "id":  "sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1209 04:26:52.662596 1187425 command_runner.go:130] >       "repoTags":  [
	I1209 04:26:52.662601 1187425 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1209 04:26:52.662605 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.662609 1187425 command_runner.go:130] >       "repoDigests":  [
	I1209 04:26:52.662617 1187425 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58"
	I1209 04:26:52.662619 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.662624 1187425 command_runner.go:130] >       "size":  "24678359",
	I1209 04:26:52.662627 1187425 command_runner.go:130] >       "uid":  {
	I1209 04:26:52.662639 1187425 command_runner.go:130] >         "value":  "0"
	I1209 04:26:52.662642 1187425 command_runner.go:130] >       },
	I1209 04:26:52.662646 1187425 command_runner.go:130] >       "username":  "",
	I1209 04:26:52.662650 1187425 command_runner.go:130] >       "pinned":  false
	I1209 04:26:52.662653 1187425 command_runner.go:130] >     },
	I1209 04:26:52.662656 1187425 command_runner.go:130] >     {
	I1209 04:26:52.662663 1187425 command_runner.go:130] >       "id":  "sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1209 04:26:52.662667 1187425 command_runner.go:130] >       "repoTags":  [
	I1209 04:26:52.662672 1187425 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1209 04:26:52.662675 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.662679 1187425 command_runner.go:130] >       "repoDigests":  [
	I1209 04:26:52.662687 1187425 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d"
	I1209 04:26:52.662690 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.662694 1187425 command_runner.go:130] >       "size":  "20661043",
	I1209 04:26:52.662697 1187425 command_runner.go:130] >       "uid":  {
	I1209 04:26:52.662701 1187425 command_runner.go:130] >         "value":  "0"
	I1209 04:26:52.662704 1187425 command_runner.go:130] >       },
	I1209 04:26:52.662707 1187425 command_runner.go:130] >       "username":  "",
	I1209 04:26:52.662712 1187425 command_runner.go:130] >       "pinned":  false
	I1209 04:26:52.662714 1187425 command_runner.go:130] >     },
	I1209 04:26:52.662717 1187425 command_runner.go:130] >     {
	I1209 04:26:52.662725 1187425 command_runner.go:130] >       "id":  "sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1209 04:26:52.662729 1187425 command_runner.go:130] >       "repoTags":  [
	I1209 04:26:52.662737 1187425 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1209 04:26:52.662741 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.662744 1187425 command_runner.go:130] >       "repoDigests":  [
	I1209 04:26:52.662752 1187425 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1209 04:26:52.662755 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.662759 1187425 command_runner.go:130] >       "size":  "22429671",
	I1209 04:26:52.662763 1187425 command_runner.go:130] >       "username":  "",
	I1209 04:26:52.662767 1187425 command_runner.go:130] >       "pinned":  false
	I1209 04:26:52.662770 1187425 command_runner.go:130] >     },
	I1209 04:26:52.662774 1187425 command_runner.go:130] >     {
	I1209 04:26:52.662781 1187425 command_runner.go:130] >       "id":  "sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1209 04:26:52.662785 1187425 command_runner.go:130] >       "repoTags":  [
	I1209 04:26:52.662791 1187425 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1209 04:26:52.662794 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.662798 1187425 command_runner.go:130] >       "repoDigests":  [
	I1209 04:26:52.662805 1187425 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6"
	I1209 04:26:52.662808 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.662813 1187425 command_runner.go:130] >       "size":  "15391364",
	I1209 04:26:52.662816 1187425 command_runner.go:130] >       "uid":  {
	I1209 04:26:52.662820 1187425 command_runner.go:130] >         "value":  "0"
	I1209 04:26:52.662823 1187425 command_runner.go:130] >       },
	I1209 04:26:52.662827 1187425 command_runner.go:130] >       "username":  "",
	I1209 04:26:52.662831 1187425 command_runner.go:130] >       "pinned":  false
	I1209 04:26:52.662834 1187425 command_runner.go:130] >     },
	I1209 04:26:52.662837 1187425 command_runner.go:130] >     {
	I1209 04:26:52.662843 1187425 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1209 04:26:52.662847 1187425 command_runner.go:130] >       "repoTags":  [
	I1209 04:26:52.662852 1187425 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1209 04:26:52.662855 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.662858 1187425 command_runner.go:130] >       "repoDigests":  [
	I1209 04:26:52.662866 1187425 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1209 04:26:52.662869 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.662873 1187425 command_runner.go:130] >       "size":  "267939",
	I1209 04:26:52.662881 1187425 command_runner.go:130] >       "uid":  {
	I1209 04:26:52.662886 1187425 command_runner.go:130] >         "value":  "65535"
	I1209 04:26:52.662890 1187425 command_runner.go:130] >       },
	I1209 04:26:52.662894 1187425 command_runner.go:130] >       "username":  "",
	I1209 04:26:52.662897 1187425 command_runner.go:130] >       "pinned":  true
	I1209 04:26:52.662900 1187425 command_runner.go:130] >     }
	I1209 04:26:52.662903 1187425 command_runner.go:130] >   ]
	I1209 04:26:52.662906 1187425 command_runner.go:130] > }
	I1209 04:26:52.665193 1187425 containerd.go:627] all images are preloaded for containerd runtime.
	I1209 04:26:52.665212 1187425 cache_images.go:86] Images are preloaded, skipping loading
	I1209 04:26:52.665219 1187425 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
	I1209 04:26:52.665322 1187425 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-667319 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-667319 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
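
The kubelet unit text above is rendered from the node config before being written out as a systemd drop-in. A sketch of that substitution using text/template; the struct and template here are illustrative stand-ins (values taken from the log), not minikube's actual source:

```go
package main

import (
	"os"
	"text/template"
)

// Illustrative inputs; minikube derives these from the cluster config shown above.
type kubeletOpts struct {
	KubernetesVersion string
	NodeName          string
	NodeIP            string
}

const dropIn = `[Unit]
Wants=containerd.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(dropIn))
	opts := kubeletOpts{
		KubernetesVersion: "v1.35.0-beta.0",
		NodeName:          "functional-667319",
		NodeIP:            "192.168.49.2",
	}
	// The rendered text is what lands in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf.
	if err := t.Execute(os.Stdout, opts); err != nil {
		panic(err)
	}
}
```
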
	I1209 04:26:52.665384 1187425 ssh_runner.go:195] Run: sudo crictl info
	I1209 04:26:52.686718 1187425 command_runner.go:130] > {
	I1209 04:26:52.686786 1187425 command_runner.go:130] >   "cniconfig": {
	I1209 04:26:52.686805 1187425 command_runner.go:130] >     "Networks": [
	I1209 04:26:52.686825 1187425 command_runner.go:130] >       {
	I1209 04:26:52.686864 1187425 command_runner.go:130] >         "Config": {
	I1209 04:26:52.686886 1187425 command_runner.go:130] >           "CNIVersion": "0.3.1",
	I1209 04:26:52.686905 1187425 command_runner.go:130] >           "Name": "cni-loopback",
	I1209 04:26:52.686923 1187425 command_runner.go:130] >           "Plugins": [
	I1209 04:26:52.686940 1187425 command_runner.go:130] >             {
	I1209 04:26:52.686967 1187425 command_runner.go:130] >               "Network": {
	I1209 04:26:52.686991 1187425 command_runner.go:130] >                 "ipam": {},
	I1209 04:26:52.687011 1187425 command_runner.go:130] >                 "type": "loopback"
	I1209 04:26:52.687028 1187425 command_runner.go:130] >               },
	I1209 04:26:52.687048 1187425 command_runner.go:130] >               "Source": "{\"type\":\"loopback\"}"
	I1209 04:26:52.687074 1187425 command_runner.go:130] >             }
	I1209 04:26:52.687097 1187425 command_runner.go:130] >           ],
	I1209 04:26:52.687120 1187425 command_runner.go:130] >           "Source": "{\n\"cniVersion\": \"0.3.1\",\n\"name\": \"cni-loopback\",\n\"plugins\": [{\n  \"type\": \"loopback\"\n}]\n}"
	I1209 04:26:52.687138 1187425 command_runner.go:130] >         },
	I1209 04:26:52.687160 1187425 command_runner.go:130] >         "IFName": "lo"
	I1209 04:26:52.687191 1187425 command_runner.go:130] >       }
	I1209 04:26:52.687207 1187425 command_runner.go:130] >     ],
	I1209 04:26:52.687225 1187425 command_runner.go:130] >     "PluginConfDir": "/etc/cni/net.d",
	I1209 04:26:52.687243 1187425 command_runner.go:130] >     "PluginDirs": [
	I1209 04:26:52.687272 1187425 command_runner.go:130] >       "/opt/cni/bin"
	I1209 04:26:52.687293 1187425 command_runner.go:130] >     ],
	I1209 04:26:52.687317 1187425 command_runner.go:130] >     "PluginMaxConfNum": 1,
	I1209 04:26:52.687334 1187425 command_runner.go:130] >     "Prefix": "eth"
	I1209 04:26:52.687351 1187425 command_runner.go:130] >   },
	I1209 04:26:52.687378 1187425 command_runner.go:130] >   "config": {
	I1209 04:26:52.687401 1187425 command_runner.go:130] >     "cdiSpecDirs": [
	I1209 04:26:52.687418 1187425 command_runner.go:130] >       "/etc/cdi",
	I1209 04:26:52.687438 1187425 command_runner.go:130] >       "/var/run/cdi"
	I1209 04:26:52.687457 1187425 command_runner.go:130] >     ],
	I1209 04:26:52.687483 1187425 command_runner.go:130] >     "cni": {
	I1209 04:26:52.687505 1187425 command_runner.go:130] >       "binDir": "",
	I1209 04:26:52.687560 1187425 command_runner.go:130] >       "binDirs": [
	I1209 04:26:52.687588 1187425 command_runner.go:130] >         "/opt/cni/bin"
	I1209 04:26:52.687609 1187425 command_runner.go:130] >       ],
	I1209 04:26:52.687628 1187425 command_runner.go:130] >       "confDir": "/etc/cni/net.d",
	I1209 04:26:52.687646 1187425 command_runner.go:130] >       "confTemplate": "",
	I1209 04:26:52.687665 1187425 command_runner.go:130] >       "ipPref": "",
	I1209 04:26:52.687692 1187425 command_runner.go:130] >       "maxConfNum": 1,
	I1209 04:26:52.687715 1187425 command_runner.go:130] >       "setupSerially": false,
	I1209 04:26:52.687733 1187425 command_runner.go:130] >       "useInternalLoopback": false
	I1209 04:26:52.687749 1187425 command_runner.go:130] >     },
	I1209 04:26:52.687775 1187425 command_runner.go:130] >     "containerd": {
	I1209 04:26:52.687802 1187425 command_runner.go:130] >       "defaultRuntimeName": "runc",
	I1209 04:26:52.687825 1187425 command_runner.go:130] >       "ignoreBlockIONotEnabledErrors": false,
	I1209 04:26:52.687845 1187425 command_runner.go:130] >       "ignoreRdtNotEnabledErrors": false,
	I1209 04:26:52.687861 1187425 command_runner.go:130] >       "runtimes": {
	I1209 04:26:52.687878 1187425 command_runner.go:130] >         "runc": {
	I1209 04:26:52.687905 1187425 command_runner.go:130] >           "ContainerAnnotations": null,
	I1209 04:26:52.687929 1187425 command_runner.go:130] >           "PodAnnotations": null,
	I1209 04:26:52.687948 1187425 command_runner.go:130] >           "baseRuntimeSpec": "",
	I1209 04:26:52.687965 1187425 command_runner.go:130] >           "cgroupWritable": false,
	I1209 04:26:52.687982 1187425 command_runner.go:130] >           "cniConfDir": "",
	I1209 04:26:52.688009 1187425 command_runner.go:130] >           "cniMaxConfNum": 0,
	I1209 04:26:52.688042 1187425 command_runner.go:130] >           "io_type": "",
	I1209 04:26:52.688055 1187425 command_runner.go:130] >           "options": {
	I1209 04:26:52.688060 1187425 command_runner.go:130] >             "BinaryName": "",
	I1209 04:26:52.688065 1187425 command_runner.go:130] >             "CriuImagePath": "",
	I1209 04:26:52.688070 1187425 command_runner.go:130] >             "CriuWorkPath": "",
	I1209 04:26:52.688078 1187425 command_runner.go:130] >             "IoGid": 0,
	I1209 04:26:52.688082 1187425 command_runner.go:130] >             "IoUid": 0,
	I1209 04:26:52.688086 1187425 command_runner.go:130] >             "NoNewKeyring": false,
	I1209 04:26:52.688093 1187425 command_runner.go:130] >             "Root": "",
	I1209 04:26:52.688097 1187425 command_runner.go:130] >             "ShimCgroup": "",
	I1209 04:26:52.688109 1187425 command_runner.go:130] >             "SystemdCgroup": false
	I1209 04:26:52.688113 1187425 command_runner.go:130] >           },
	I1209 04:26:52.688118 1187425 command_runner.go:130] >           "privileged_without_host_devices": false,
	I1209 04:26:52.688128 1187425 command_runner.go:130] >           "privileged_without_host_devices_all_devices_allowed": false,
	I1209 04:26:52.688138 1187425 command_runner.go:130] >           "runtimePath": "",
	I1209 04:26:52.688145 1187425 command_runner.go:130] >           "runtimeType": "io.containerd.runc.v2",
	I1209 04:26:52.688153 1187425 command_runner.go:130] >           "sandboxer": "podsandbox",
	I1209 04:26:52.688157 1187425 command_runner.go:130] >           "snapshotter": ""
	I1209 04:26:52.688161 1187425 command_runner.go:130] >         }
	I1209 04:26:52.688164 1187425 command_runner.go:130] >       }
	I1209 04:26:52.688167 1187425 command_runner.go:130] >     },
	I1209 04:26:52.688181 1187425 command_runner.go:130] >     "containerdEndpoint": "/run/containerd/containerd.sock",
	I1209 04:26:52.688190 1187425 command_runner.go:130] >     "containerdRootDir": "/var/lib/containerd",
	I1209 04:26:52.688198 1187425 command_runner.go:130] >     "device_ownership_from_security_context": false,
	I1209 04:26:52.688205 1187425 command_runner.go:130] >     "disableApparmor": false,
	I1209 04:26:52.688210 1187425 command_runner.go:130] >     "disableHugetlbController": true,
	I1209 04:26:52.688218 1187425 command_runner.go:130] >     "disableProcMount": false,
	I1209 04:26:52.688223 1187425 command_runner.go:130] >     "drainExecSyncIOTimeout": "0s",
	I1209 04:26:52.688231 1187425 command_runner.go:130] >     "enableCDI": true,
	I1209 04:26:52.688235 1187425 command_runner.go:130] >     "enableSelinux": false,
	I1209 04:26:52.688240 1187425 command_runner.go:130] >     "enableUnprivilegedICMP": true,
	I1209 04:26:52.688248 1187425 command_runner.go:130] >     "enableUnprivilegedPorts": true,
	I1209 04:26:52.688253 1187425 command_runner.go:130] >     "ignoreDeprecationWarnings": null,
	I1209 04:26:52.688259 1187425 command_runner.go:130] >     "ignoreImageDefinedVolumes": false,
	I1209 04:26:52.688269 1187425 command_runner.go:130] >     "maxContainerLogLineSize": 16384,
	I1209 04:26:52.688278 1187425 command_runner.go:130] >     "netnsMountsUnderStateDir": false,
	I1209 04:26:52.688282 1187425 command_runner.go:130] >     "restrictOOMScoreAdj": false,
	I1209 04:26:52.688293 1187425 command_runner.go:130] >     "rootDir": "/var/lib/containerd/io.containerd.grpc.v1.cri",
	I1209 04:26:52.688297 1187425 command_runner.go:130] >     "selinuxCategoryRange": 1024,
	I1209 04:26:52.688306 1187425 command_runner.go:130] >     "stateDir": "/run/containerd/io.containerd.grpc.v1.cri",
	I1209 04:26:52.688312 1187425 command_runner.go:130] >     "tolerateMissingHugetlbController": true,
	I1209 04:26:52.688320 1187425 command_runner.go:130] >     "unsetSeccompProfile": ""
	I1209 04:26:52.688323 1187425 command_runner.go:130] >   },
	I1209 04:26:52.688327 1187425 command_runner.go:130] >   "features": {
	I1209 04:26:52.688332 1187425 command_runner.go:130] >     "supplemental_groups_policy": true
	I1209 04:26:52.688337 1187425 command_runner.go:130] >   },
	I1209 04:26:52.688341 1187425 command_runner.go:130] >   "golang": "go1.24.9",
	I1209 04:26:52.688355 1187425 command_runner.go:130] >   "lastCNILoadStatus": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1209 04:26:52.688368 1187425 command_runner.go:130] >   "lastCNILoadStatus.default": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1209 04:26:52.688376 1187425 command_runner.go:130] >   "runtimeHandlers": [
	I1209 04:26:52.688379 1187425 command_runner.go:130] >     {
	I1209 04:26:52.688388 1187425 command_runner.go:130] >       "features": {
	I1209 04:26:52.688394 1187425 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1209 04:26:52.688403 1187425 command_runner.go:130] >         "user_namespaces": true
	I1209 04:26:52.688406 1187425 command_runner.go:130] >       }
	I1209 04:26:52.688409 1187425 command_runner.go:130] >     },
	I1209 04:26:52.688412 1187425 command_runner.go:130] >     {
	I1209 04:26:52.688416 1187425 command_runner.go:130] >       "features": {
	I1209 04:26:52.688423 1187425 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1209 04:26:52.688432 1187425 command_runner.go:130] >         "user_namespaces": true
	I1209 04:26:52.688435 1187425 command_runner.go:130] >       },
	I1209 04:26:52.688439 1187425 command_runner.go:130] >       "name": "runc"
	I1209 04:26:52.688446 1187425 command_runner.go:130] >     }
	I1209 04:26:52.688449 1187425 command_runner.go:130] >   ],
	I1209 04:26:52.688457 1187425 command_runner.go:130] >   "status": {
	I1209 04:26:52.688461 1187425 command_runner.go:130] >     "conditions": [
	I1209 04:26:52.688469 1187425 command_runner.go:130] >       {
	I1209 04:26:52.688476 1187425 command_runner.go:130] >         "message": "",
	I1209 04:26:52.688484 1187425 command_runner.go:130] >         "reason": "",
	I1209 04:26:52.688488 1187425 command_runner.go:130] >         "status": true,
	I1209 04:26:52.688493 1187425 command_runner.go:130] >         "type": "RuntimeReady"
	I1209 04:26:52.688497 1187425 command_runner.go:130] >       },
	I1209 04:26:52.688502 1187425 command_runner.go:130] >       {
	I1209 04:26:52.688509 1187425 command_runner.go:130] >         "message": "Network plugin returns error: cni plugin not initialized",
	I1209 04:26:52.688518 1187425 command_runner.go:130] >         "reason": "NetworkPluginNotReady",
	I1209 04:26:52.688522 1187425 command_runner.go:130] >         "status": false,
	I1209 04:26:52.688530 1187425 command_runner.go:130] >         "type": "NetworkReady"
	I1209 04:26:52.688534 1187425 command_runner.go:130] >       },
	I1209 04:26:52.688541 1187425 command_runner.go:130] >       {
	I1209 04:26:52.688568 1187425 command_runner.go:130] >         "message": "{\"io.containerd.deprecation/cgroup-v1\":\"The support for cgroup v1 is deprecated since containerd v2.2 and will be removed by no later than May 2029. Upgrade the host to use cgroup v2.\"}",
	I1209 04:26:52.688578 1187425 command_runner.go:130] >         "reason": "ContainerdHasDeprecationWarnings",
	I1209 04:26:52.688584 1187425 command_runner.go:130] >         "status": false,
	I1209 04:26:52.688590 1187425 command_runner.go:130] >         "type": "ContainerdHasNoDeprecationWarnings"
	I1209 04:26:52.688595 1187425 command_runner.go:130] >       }
	I1209 04:26:52.688598 1187425 command_runner.go:130] >     ]
	I1209 04:26:52.688606 1187425 command_runner.go:130] >   }
	I1209 04:26:52.688609 1187425 command_runner.go:130] > }
	I1209 04:26:52.690920 1187425 cni.go:84] Creating CNI manager for ""
	I1209 04:26:52.690942 1187425 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1209 04:26:52.690965 1187425 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
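
The kindnet recommendation follows from the `crictl info` output above, where the NetworkReady condition is false with "cni plugin not initialized". A sketch of extracting that condition, assuming the same JSON shape:

```go
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// condition mirrors the entries under status.conditions in `crictl info`.
type condition struct {
	Type    string `json:"type"`
	Status  bool   `json:"status"`
	Message string `json:"message"`
}

type criInfo struct {
	Status struct {
		Conditions []condition `json:"conditions"`
	} `json:"status"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "info").Output()
	if err != nil {
		panic(err)
	}
	var info criInfo
	if err := json.Unmarshal(out, &info); err != nil {
		panic(err)
	}
	for _, c := range info.Status.Conditions {
		if c.Type == "NetworkReady" && !c.Status {
			// Matches the "cni plugin not initialized" message in the log above.
			fmt.Println("CNI not ready:", c.Message)
		}
	}
}
```
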
	I1209 04:26:52.690987 1187425 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-667319 NodeName:functional-667319 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1209 04:26:52.691101 1187425 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-667319"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1209 04:26:52.691179 1187425 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1209 04:26:52.697985 1187425 command_runner.go:130] > kubeadm
	I1209 04:26:52.698006 1187425 command_runner.go:130] > kubectl
	I1209 04:26:52.698010 1187425 command_runner.go:130] > kubelet
	I1209 04:26:52.698825 1187425 binaries.go:51] Found k8s binaries, skipping transfer
	I1209 04:26:52.698896 1187425 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1209 04:26:52.706638 1187425 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1209 04:26:52.718822 1187425 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1209 04:26:52.731825 1187425 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
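
The three scp lines above write the kubelet drop-in, the kubelet.service unit, and the kubeadm config rendered earlier to the node. That kubeadm file holds four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by `---`. A stdlib-only sketch that splits such a file and reports each document's kind; naive `---` splitting is adequate for this generated file, though not for arbitrary YAML:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	// Path from the log above; reading it on the node requires root.
	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		panic(err)
	}
	// The generated file places "---" on its own line between documents.
	for i, doc := range strings.Split(string(data), "\n---\n") {
		for _, line := range strings.Split(doc, "\n") {
			if strings.HasPrefix(line, "kind: ") {
				fmt.Printf("document %d: %s\n", i, strings.TrimPrefix(line, "kind: "))
			}
		}
	}
}
```
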
	I1209 04:26:52.744962 1187425 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1209 04:26:52.748733 1187425 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
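
The grep above confirms that /etc/hosts already maps control-plane.minikube.internal to the node IP, so nothing needs rewriting. A stdlib equivalent of the same check:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/etc/hosts")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		// Same match as the `grep 192.168.49.2<TAB>control-plane.minikube.internal$` run above.
		if len(fields) >= 2 && fields[0] == "192.168.49.2" && fields[1] == "control-plane.minikube.internal" {
			fmt.Println(sc.Text())
			return
		}
	}
	fmt.Println("entry missing") // minikube would add it in this case
}
```
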
	I1209 04:26:52.748987 1187425 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 04:26:52.855986 1187425 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 04:26:53.181367 1187425 certs.go:69] Setting up /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319 for IP: 192.168.49.2
	I1209 04:26:53.181392 1187425 certs.go:195] generating shared ca certs ...
	I1209 04:26:53.181408 1187425 certs.go:227] acquiring lock for ca certs: {Name:mk15788702f8c4e23b5aeab3f44961d296fab259 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 04:26:53.181570 1187425 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.key
	I1209 04:26:53.181618 1187425 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/proxy-client-ca.key
	I1209 04:26:53.181630 1187425 certs.go:257] generating profile certs ...
	I1209 04:26:53.181740 1187425 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/client.key
	I1209 04:26:53.181805 1187425 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/apiserver.key.c80eb595
	I1209 04:26:53.181848 1187425 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/proxy-client.key
	I1209 04:26:53.181859 1187425 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1209 04:26:53.181873 1187425 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1209 04:26:53.181889 1187425 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1142328/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1209 04:26:53.181899 1187425 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1142328/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1209 04:26:53.181914 1187425 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1209 04:26:53.181925 1187425 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1209 04:26:53.181943 1187425 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1209 04:26:53.181954 1187425 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1209 04:26:53.182004 1187425 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/1144231.pem (1338 bytes)
	W1209 04:26:53.182038 1187425 certs.go:480] ignoring /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/1144231_empty.pem, impossibly tiny 0 bytes
	I1209 04:26:53.182050 1187425 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca-key.pem (1679 bytes)
	I1209 04:26:53.182079 1187425 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem (1078 bytes)
	I1209 04:26:53.182105 1187425 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/cert.pem (1123 bytes)
	I1209 04:26:53.182136 1187425 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/key.pem (1675 bytes)
	I1209 04:26:53.182187 1187425 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/ssl/certs/11442312.pem (1708 bytes)
	I1209 04:26:53.182243 1187425 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/ssl/certs/11442312.pem -> /usr/share/ca-certificates/11442312.pem
	I1209 04:26:53.182260 1187425 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1209 04:26:53.182277 1187425 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/1144231.pem -> /usr/share/ca-certificates/1144231.pem
	I1209 04:26:53.182817 1187425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 04:26:53.202751 1187425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1209 04:26:53.220083 1187425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 04:26:53.237728 1187425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1209 04:26:53.255002 1187425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1209 04:26:53.271923 1187425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1209 04:26:53.289401 1187425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 04:26:53.306616 1187425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1209 04:26:53.323564 1187425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/ssl/certs/11442312.pem --> /usr/share/ca-certificates/11442312.pem (1708 bytes)
	I1209 04:26:53.340526 1187425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 04:26:53.357221 1187425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/1144231.pem --> /usr/share/ca-certificates/1144231.pem (1338 bytes)
	I1209 04:26:53.373705 1187425 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 04:26:53.386274 1187425 ssh_runner.go:195] Run: openssl version
	I1209 04:26:53.391826 1187425 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1209 04:26:53.392252 1187425 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/11442312.pem
	I1209 04:26:53.399306 1187425 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/11442312.pem /etc/ssl/certs/11442312.pem
	I1209 04:26:53.406404 1187425 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11442312.pem
	I1209 04:26:53.409862 1187425 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec  9 04:18 /usr/share/ca-certificates/11442312.pem
	I1209 04:26:53.409914 1187425 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 04:18 /usr/share/ca-certificates/11442312.pem
	I1209 04:26:53.409972 1187425 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11442312.pem
	I1209 04:26:53.450109 1187425 command_runner.go:130] > 3ec20f2e
	I1209 04:26:53.450580 1187425 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1209 04:26:53.457724 1187425 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1209 04:26:53.464857 1187425 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1209 04:26:53.472136 1187425 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 04:26:53.475789 1187425 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec  9 04:09 /usr/share/ca-certificates/minikubeCA.pem
	I1209 04:26:53.475830 1187425 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 04:09 /usr/share/ca-certificates/minikubeCA.pem
	I1209 04:26:53.475880 1187425 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 04:26:53.517012 1187425 command_runner.go:130] > b5213941
	I1209 04:26:53.517090 1187425 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1209 04:26:53.524195 1187425 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1144231.pem
	I1209 04:26:53.531059 1187425 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1144231.pem /etc/ssl/certs/1144231.pem
	I1209 04:26:53.537929 1187425 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1144231.pem
	I1209 04:26:53.541362 1187425 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec  9 04:18 /usr/share/ca-certificates/1144231.pem
	I1209 04:26:53.541587 1187425 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 04:18 /usr/share/ca-certificates/1144231.pem
	I1209 04:26:53.541670 1187425 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1144231.pem
	I1209 04:26:53.586134 1187425 command_runner.go:130] > 51391683
	I1209 04:26:53.586694 1187425 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
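
Each of the three cycles above installs one CA into the node's trust store: place the PEM under /usr/share/ca-certificates, ask openssl for its subject hash, and symlink /etc/ssl/certs/<hash>.0 to it so OpenSSL's hashed directory lookup can find it. A sketch of the same steps driven from Go, shelling out to openssl as the log does (path taken from the log):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	pem := "/usr/share/ca-certificates/minikubeCA.pem"

	// `openssl x509 -hash -noout` prints the subject hash, e.g. "b5213941" above.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out))

	// OpenSSL resolves CAs through <hash>.0 symlinks in /etc/ssl/certs.
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	_ = os.Remove(link) // replace any stale link, like `ln -fs`
	if err := os.Symlink(pem, link); err != nil {
		panic(err)
	}
	fmt.Println("linked", link, "->", pem)
}
```
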
	I1209 04:26:53.593775 1187425 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 04:26:53.597060 1187425 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 04:26:53.597083 1187425 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1209 04:26:53.597090 1187425 command_runner.go:130] > Device: 259,1	Inode: 1317519     Links: 1
	I1209 04:26:53.597096 1187425 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1209 04:26:53.597101 1187425 command_runner.go:130] > Access: 2025-12-09 04:22:46.557738038 +0000
	I1209 04:26:53.597107 1187425 command_runner.go:130] > Modify: 2025-12-09 04:18:42.397294101 +0000
	I1209 04:26:53.597112 1187425 command_runner.go:130] > Change: 2025-12-09 04:18:42.397294101 +0000
	I1209 04:26:53.597120 1187425 command_runner.go:130] >  Birth: 2025-12-09 04:18:42.397294101 +0000
	I1209 04:26:53.597202 1187425 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1209 04:26:53.637326 1187425 command_runner.go:130] > Certificate will not expire
	I1209 04:26:53.637892 1187425 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1209 04:26:53.678262 1187425 command_runner.go:130] > Certificate will not expire
	I1209 04:26:53.678829 1187425 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1209 04:26:53.719319 1187425 command_runner.go:130] > Certificate will not expire
	I1209 04:26:53.719397 1187425 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1209 04:26:53.760102 1187425 command_runner.go:130] > Certificate will not expire
	I1209 04:26:53.760184 1187425 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1209 04:26:53.805340 1187425 command_runner.go:130] > Certificate will not expire
	I1209 04:26:53.805854 1187425 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1209 04:26:53.846216 1187425 command_runner.go:130] > Certificate will not expire
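
Each `openssl x509 -checkend 86400` run above asks whether a certificate expires within the next 24 hours; "Certificate will not expire" means it does not. The same check in pure Go via crypto/x509 (path from the log; reading it requires root):

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// Equivalent of `openssl x509 -checkend 86400`: does NotAfter fall within now+24h?
	if time.Now().Add(86400 * time.Second).After(cert.NotAfter) {
		fmt.Println("Certificate will expire")
	} else {
		fmt.Println("Certificate will not expire")
	}
}
```
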
	I1209 04:26:53.846284 1187425 kubeadm.go:401] StartCluster: {Name:functional-667319 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-667319 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 04:26:53.846701 1187425 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1209 04:26:53.846774 1187425 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 04:26:53.877891 1187425 cri.go:89] found id: ""
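
Before attempting a restart, the code lists any existing kube-system containers; the empty `found id: ""` result above means there is nothing to stop first. A sketch of the same query via os/exec:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same query as the log: all containers labelled with the kube-system namespace.
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		panic(err)
	}
	ids := strings.Fields(string(out))
	if len(ids) == 0 {
		fmt.Println(`found id: ""`) // matches the cri.go:89 line above
		return
	}
	fmt.Println("found", len(ids), "kube-system containers")
}
```
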
	I1209 04:26:53.877982 1187425 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1209 04:26:53.884657 1187425 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1209 04:26:53.884683 1187425 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1209 04:26:53.884690 1187425 command_runner.go:130] > /var/lib/minikube/etcd:
	I1209 04:26:53.885556 1187425 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1209 04:26:53.885572 1187425 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1209 04:26:53.885646 1187425 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1209 04:26:53.892789 1187425 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1209 04:26:53.893171 1187425 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-667319" does not appear in /home/jenkins/minikube-integration/22081-1142328/kubeconfig
	I1209 04:26:53.893275 1187425 kubeconfig.go:62] /home/jenkins/minikube-integration/22081-1142328/kubeconfig needs updating (will repair): [kubeconfig missing "functional-667319" cluster setting kubeconfig missing "functional-667319" context setting]
	I1209 04:26:53.893568 1187425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1142328/kubeconfig: {Name:mk0dc127429f88dc4fdfb2d110deebc58207c1b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
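
The repair above fires because neither a cluster nor a context named functional-667319 exists in the kubeconfig yet. A sketch of that verification using client-go's clientcmd loader (requires the k8s.io/client-go module; path and profile name taken from the log):

```go
package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	path := "/home/jenkins/minikube-integration/22081-1142328/kubeconfig"
	name := "functional-667319"

	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		panic(err)
	}
	// The kubeconfig "needs updating (will repair)" when either entry is absent.
	if _, ok := cfg.Clusters[name]; !ok {
		fmt.Println("kubeconfig missing cluster setting:", name)
	}
	if _, ok := cfg.Contexts[name]; !ok {
		fmt.Println("kubeconfig missing context setting:", name)
	}
}
```
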
	I1209 04:26:53.893971 1187425 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22081-1142328/kubeconfig
	I1209 04:26:53.894121 1187425 kapi.go:59] client config for functional-667319: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/client.crt", KeyFile:"/home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/client.key", CAFile:"/home/jenkins/minikube-integration/22081-1142328/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb3ec0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1209 04:26:53.894601 1187425 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1209 04:26:53.894621 1187425 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1209 04:26:53.894627 1187425 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1209 04:26:53.894636 1187425 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1209 04:26:53.894643 1187425 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1209 04:26:53.894942 1187425 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1209 04:26:53.895030 1187425 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1209 04:26:53.902229 1187425 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1209 04:26:53.902301 1187425 kubeadm.go:602] duration metric: took 16.713333ms to restartPrimaryControlPlane
	I1209 04:26:53.902316 1187425 kubeadm.go:403] duration metric: took 56.036306ms to StartCluster
	I1209 04:26:53.902333 1187425 settings.go:142] acquiring lock: {Name:mk8fa744e3d74bf8a1cbf5ac275c9f1969ad91a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 04:26:53.902398 1187425 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22081-1142328/kubeconfig
	I1209 04:26:53.902993 1187425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1142328/kubeconfig: {Name:mk0dc127429f88dc4fdfb2d110deebc58207c1b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 04:26:53.903190 1187425 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1209 04:26:53.903521 1187425 config.go:182] Loaded profile config "functional-667319": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1209 04:26:53.903568 1187425 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1209 04:26:53.903630 1187425 addons.go:70] Setting storage-provisioner=true in profile "functional-667319"
	I1209 04:26:53.903643 1187425 addons.go:239] Setting addon storage-provisioner=true in "functional-667319"
	I1209 04:26:53.903675 1187425 host.go:66] Checking if "functional-667319" exists ...
	I1209 04:26:53.904120 1187425 addons.go:70] Setting default-storageclass=true in profile "functional-667319"
	I1209 04:26:53.904144 1187425 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-667319"
	I1209 04:26:53.904441 1187425 cli_runner.go:164] Run: docker container inspect functional-667319 --format={{.State.Status}}
	I1209 04:26:53.904640 1187425 cli_runner.go:164] Run: docker container inspect functional-667319 --format={{.State.Status}}
	I1209 04:26:53.910201 1187425 out.go:179] * Verifying Kubernetes components...
	I1209 04:26:53.913884 1187425 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 04:26:53.930099 1187425 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 04:26:53.932721 1187425 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22081-1142328/kubeconfig
	I1209 04:26:53.932880 1187425 kapi.go:59] client config for functional-667319: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/client.crt", KeyFile:"/home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/client.key", CAFile:"/home/jenkins/minikube-integration/22081-1142328/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb3ec0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1209 04:26:53.933092 1187425 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:26:53.933105 1187425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1209 04:26:53.933155 1187425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-667319
	I1209 04:26:53.933672 1187425 addons.go:239] Setting addon default-storageclass=true in "functional-667319"
	I1209 04:26:53.933726 1187425 host.go:66] Checking if "functional-667319" exists ...
	I1209 04:26:53.934157 1187425 cli_runner.go:164] Run: docker container inspect functional-667319 --format={{.State.Status}}
	I1209 04:26:53.980209 1187425 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/functional-667319/id_rsa Username:docker}
	I1209 04:26:53.991515 1187425 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1209 04:26:53.991543 1187425 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1209 04:26:53.991606 1187425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-667319
	I1209 04:26:54.014988 1187425 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/functional-667319/id_rsa Username:docker}
	I1209 04:26:54.109673 1187425 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 04:26:54.172299 1187425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1209 04:26:54.172446 1187425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:26:54.932432 1187425 node_ready.go:35] waiting up to 6m0s for node "functional-667319" to be "Ready" ...
	I1209 04:26:54.932477 1187425 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:26:54.932512 1187425 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:26:54.932537 1187425 retry.go:31] will retry after 239.582285ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:26:54.932571 1187425 type.go:168] "Request Body" body=""
	I1209 04:26:54.932584 1187425 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:26:54.932596 1187425 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:26:54.932603 1187425 retry.go:31] will retry after 326.615849ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:26:54.932629 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:26:54.932908 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:26:55.173322 1187425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:26:55.233582 1187425 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:26:55.233631 1187425 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:26:55.233651 1187425 retry.go:31] will retry after 246.357107ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:26:55.259785 1187425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 04:26:55.318382 1187425 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:26:55.318469 1187425 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:26:55.318493 1187425 retry.go:31] will retry after 410.345383ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:26:55.433607 1187425 type.go:168] "Request Body" body=""
	I1209 04:26:55.433683 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:26:55.434019 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:26:55.480272 1187425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:26:55.539370 1187425 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:26:55.543073 1187425 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:26:55.543104 1187425 retry.go:31] will retry after 836.674318ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:26:55.729246 1187425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 04:26:55.790859 1187425 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:26:55.790906 1187425 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:26:55.790952 1187425 retry.go:31] will retry after 634.479833ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:26:55.933159 1187425 type.go:168] "Request Body" body=""
	I1209 04:26:55.933235 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:26:55.933592 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:26:56.380124 1187425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:26:56.425589 1187425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 04:26:56.432912 1187425 type.go:168] "Request Body" body=""
	I1209 04:26:56.433084 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:26:56.433454 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:26:56.462533 1187425 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:26:56.462616 1187425 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:26:56.462643 1187425 retry.go:31] will retry after 603.323732ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:26:56.528272 1187425 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:26:56.528318 1187425 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:26:56.528338 1187425 retry.go:31] will retry after 1.072780189s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:26:56.932753 1187425 type.go:168] "Request Body" body=""
	I1209 04:26:56.932827 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:26:56.933209 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:26:56.933265 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:26:57.066591 1187425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:26:57.132172 1187425 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:26:57.135761 1187425 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:26:57.135793 1187425 retry.go:31] will retry after 1.855495012s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:26:57.433210 1187425 type.go:168] "Request Body" body=""
	I1209 04:26:57.433286 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:26:57.433630 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:26:57.601957 1187425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 04:26:57.657995 1187425 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:26:57.658038 1187425 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:26:57.658057 1187425 retry.go:31] will retry after 1.134842328s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:26:57.933276 1187425 type.go:168] "Request Body" body=""
	I1209 04:26:57.933355 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:26:57.933644 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:26:58.433445 1187425 type.go:168] "Request Body" body=""
	I1209 04:26:58.433533 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:26:58.433853 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:26:58.793130 1187425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 04:26:58.858674 1187425 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:26:58.858714 1187425 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:26:58.858733 1187425 retry.go:31] will retry after 2.746713696s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:26:58.933078 1187425 type.go:168] "Request Body" body=""
	I1209 04:26:58.933157 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:26:58.933497 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:26:58.933557 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:26:58.991692 1187425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:26:59.049214 1187425 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:26:59.052768 1187425 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:26:59.052797 1187425 retry.go:31] will retry after 2.715253433s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:26:59.433202 1187425 type.go:168] "Request Body" body=""
	I1209 04:26:59.433383 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:26:59.433760 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:26:59.932622 1187425 type.go:168] "Request Body" body=""
	I1209 04:26:59.932706 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:26:59.933025 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:00.432716 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:00.432792 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:00.433084 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:00.932666 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:00.932767 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:00.933080 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:01.432721 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:01.432802 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:01.433155 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:27:01.433220 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:27:01.606514 1187425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 04:27:01.664108 1187425 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:27:01.667800 1187425 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:27:01.667831 1187425 retry.go:31] will retry after 3.567848129s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:27:01.769041 1187425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:27:01.828356 1187425 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:27:01.831855 1187425 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:27:01.831890 1187425 retry.go:31] will retry after 1.487712174s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:27:01.933283 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:01.933357 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:01.933696 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:02.433227 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:02.433296 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:02.433566 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:02.933365 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:02.933446 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:02.933784 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:03.320437 1187425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:27:03.380650 1187425 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:27:03.380689 1187425 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:27:03.380707 1187425 retry.go:31] will retry after 2.980491619s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:27:03.432967 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:03.433052 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:03.433335 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:27:03.433382 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:27:03.933173 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:03.933261 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:03.933564 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:04.433334 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:04.433407 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:04.433774 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:04.932608 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:04.932706 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:04.932991 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:05.236581 1187425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 04:27:05.294920 1187425 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:27:05.298256 1187425 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:27:05.298287 1187425 retry.go:31] will retry after 3.775902085s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:27:05.433544 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:05.433623 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:05.433911 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:27:05.433968 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:27:05.932633 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:05.932732 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:05.933097 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:06.361776 1187425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:27:06.423571 1187425 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:27:06.423609 1187425 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:27:06.423628 1187425 retry.go:31] will retry after 5.55631863s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:27:06.432687 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:06.432759 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:06.433064 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:06.932763 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:06.932858 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:06.933188 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:07.432720 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:07.432798 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:07.433122 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:07.932712 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:07.932788 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:07.933143 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:27:07.933270 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:27:08.432753 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:08.432826 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:08.433121 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:08.932708 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:08.932789 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:08.933114 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:09.074480 1187425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 04:27:09.131213 1187425 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:27:09.134642 1187425 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:27:09.134677 1187425 retry.go:31] will retry after 3.336397846s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:27:09.433063 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:09.433136 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:09.433477 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:09.933147 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:09.933243 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:09.933515 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:27:09.933565 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:27:10.433463 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:10.433543 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:10.433860 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:10.933720 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:10.933792 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:10.934110 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:11.432758 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:11.432831 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:11.433103 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:11.932702 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:11.932775 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:11.933127 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:11.980489 1187425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:27:12.042917 1187425 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:27:12.047245 1187425 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:27:12.047276 1187425 retry.go:31] will retry after 4.846358398s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
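The retry.go lines interleaved with the polling show the other half of the picture: each failed kubectl apply is rescheduled after a randomized, growing delay (4.8s, then 12.4s, later 17.6s and 30.4s). A sketch of that jittered-backoff shape follows; the retry helper and its parameters are hypothetical and only approximate what minikube's retry.go does.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retry is a hypothetical helper approximating the behavior visible in the log:
// run fn, and on failure sleep a jittered, roughly doubling delay before the
// next attempt, logging the delay the same way the retry.go lines above do.
func retry(maxAttempts int, base time.Duration, fn func() error) error {
	var err error
	for attempt := 0; attempt < maxAttempts; attempt++ {
		if err = fn(); err == nil {
			return nil
		}
		// base * 2^attempt, scaled by a random factor in [0.5, 1.5).
		d := time.Duration(float64(base) * float64(uint64(1)<<attempt) * (0.5 + rand.Float64()))
		fmt.Printf("will retry after %v: %v\n", d, err)
		time.Sleep(d)
	}
	return err
}

func main() {
	_ = retry(4, 5*time.Second, func() error {
		return errors.New("connect: connection refused") // stands in for the failing apply
	})
}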
	I1209 04:27:12.432667 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:12.432737 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:12.433027 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:27:12.433074 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:27:12.471387 1187425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 04:27:12.533451 1187425 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:27:12.533488 1187425 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:27:12.533508 1187425 retry.go:31] will retry after 12.396608004s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
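Note what actually fails in these apply attempts: kubectl's client-side schema validation has to download the apiserver's /openapi/v2 document first, so while the apiserver is unreachable even a perfectly valid manifest is rejected before anything is submitted. The error text itself names the escape hatch, --validate=false, which skips the OpenAPI download; a sketch of invoking kubectl that way from Go is below. The flags and manifest path are taken from the log, and skipping validation is only appropriate when the manifest is trusted; the apply itself still needs the apiserver to be reachable once it comes back.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// --validate=false skips the client-side OpenAPI download that fails above;
	// the server-side apply of the objects still requires a live apiserver.
	cmd := exec.Command("kubectl", "apply", "--force", "--validate=false",
		"-f", "/etc/kubernetes/addons/storageclass.yaml") // path from the log
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		fmt.Println("kubectl apply failed:", err)
	}
}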
	I1209 04:27:12.932956 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:12.933031 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:12.933353 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:13.432721 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:13.432794 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:13.433126 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:13.932935 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:13.933007 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:13.933342 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:14.432734 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:14.432802 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:14.433056 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:27:14.433098 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:27:14.932653 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:14.932768 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:14.933061 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:15.432698 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:15.432796 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:15.433182 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:15.932668 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:15.932746 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:15.933050 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:16.432712 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:16.432788 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:16.433123 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:27:16.433176 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:27:16.894794 1187425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:27:16.933270 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:16.933350 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:16.933633 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:16.956237 1187425 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:27:16.956277 1187425 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:27:16.956299 1187425 retry.go:31] will retry after 11.708634593s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:27:17.432723 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:17.432798 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:17.433065 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:17.932740 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:17.932815 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:17.933136 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:18.432860 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:18.432932 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:18.433214 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:27:18.433267 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:27:18.932660 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:18.932728 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:18.933009 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:19.432696 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:19.432772 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:19.433147 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:19.932674 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:19.932750 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:19.933101 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:20.432907 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:20.432984 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:20.433236 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:20.932684 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:20.932760 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:20.933100 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:27:20.933152 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:27:21.432797 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:21.432871 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:21.433197 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:21.932637 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:21.932726 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:21.932993 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:22.432707 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:22.432782 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:22.433117 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:22.932841 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:22.932917 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:22.933234 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:27:22.933291 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:27:23.432668 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:23.432751 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:23.433027 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:23.932873 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:23.932948 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:23.933315 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:24.432680 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:24.432753 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:24.433071 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:24.930697 1187425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 04:27:24.933014 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:24.933088 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:24.933320 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:27:24.933369 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:27:25.005568 1187425 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:27:25.005627 1187425 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:27:25.005648 1187425 retry.go:31] will retry after 8.82909482s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:27:25.433152 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:25.433233 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:25.433532 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:25.932972 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:25.933044 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:25.933358 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:26.432756 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:26.432830 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:26.433108 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:26.932726 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:26.932803 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:26.933099 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:27.432693 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:27.432765 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:27.433082 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:27:27.433136 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:27:27.932636 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:27.932712 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:27.933033 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:28.432693 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:28.432767 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:28.433092 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:28.665515 1187425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:27:28.738878 1187425 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:27:28.745399 1187425 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:27:28.745439 1187425 retry.go:31] will retry after 17.60519501s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:27:28.932773 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:28.932863 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:28.933172 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:29.432651 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:29.432722 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:29.432984 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:29.932660 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:29.932732 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:29.933044 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:27:29.933094 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:27:30.432735 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:30.432809 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:30.433166 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:30.932654 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:30.932753 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:30.933041 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:31.432690 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:31.432771 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:31.433110 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:31.932741 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:31.932815 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:31.933152 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:27:31.933206 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:27:32.432841 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:32.432914 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:32.433177 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:32.932689 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:32.932763 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:32.933056 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:33.432759 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:33.432858 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:33.433217 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:33.835821 1187425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 04:27:33.901341 1187425 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:27:33.901393 1187425 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:27:33.901417 1187425 retry.go:31] will retry after 15.074885047s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:27:33.933523 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:33.933593 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:33.933865 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:27:33.933909 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:27:34.433650 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:34.433727 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:34.434057 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:34.933020 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:34.933101 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:34.933420 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:35.433095 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:35.433165 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:35.433445 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:35.933243 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:35.933325 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:35.933633 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:36.433407 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:36.433483 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:36.433826 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:27:36.433882 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:27:36.933227 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:36.933299 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:36.933563 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:37.433288 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:37.433419 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:37.433790 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:37.933592 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:37.933667 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:37.934021 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:38.432659 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:38.432729 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:38.433014 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:38.932721 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:38.932798 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:38.933137 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:27:38.933190 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:27:39.432858 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:39.432933 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:39.433235 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:39.932589 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:39.932669 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:39.932951 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:40.432701 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:40.432786 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:40.433116 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:40.932716 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:40.932797 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:40.933091 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:41.432779 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:41.432846 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:41.433142 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:27:41.433204 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:27:41.932681 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:41.932757 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:41.933101 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:42.432844 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:42.432919 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:42.433290 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:42.932967 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:42.933038 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:42.933352 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:43.432697 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:43.432812 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:43.433136 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:43.933033 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:43.933129 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:43.933472 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:27:43.933526 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:27:44.433250 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:44.433328 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:44.433660 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:44.933653 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:44.933724 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:44.934068 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:45.432640 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:45.432721 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:45.433020 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:45.932669 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:45.932752 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:45.933159 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:46.350898 1187425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:27:46.406595 1187425 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:27:46.409949 1187425 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:27:46.409981 1187425 retry.go:31] will retry after 30.377142014s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:27:46.433127 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:46.433197 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:46.433514 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:27:46.433571 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:27:46.933101 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:46.933177 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:46.933501 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:47.433170 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:47.433241 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:47.433507 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:47.932770 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:47.932843 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:47.933174 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:48.432886 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:48.432966 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:48.433255 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:48.932660 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:48.932727 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:48.933049 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:27:48.933100 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:27:48.977251 1187425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 04:27:49.036457 1187425 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:27:49.036497 1187425 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:27:49.036517 1187425 retry.go:31] will retry after 20.293703248s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:27:49.432687 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:49.432763 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:49.433127 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:49.932861 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:49.932933 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:49.933269 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:50.433588 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:50.433662 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:50.433924 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:50.932670 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:50.932744 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:50.933080 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:27:50.933141 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:27:51.432801 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:51.432888 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:51.433180 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:51.932877 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:51.932959 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:51.933270 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:52.432704 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:52.432782 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:52.433138 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:52.932700 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:52.932780 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:52.933082 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:53.432653 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:53.432725 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:53.433037 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:27:53.433089 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:27:53.932975 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:53.933048 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:53.933385 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:54.432710 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:54.432790 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:54.433145 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:54.932877 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:54.932952 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:54.933240 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:55.432707 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:55.432782 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:55.433125 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:27:55.433191 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:27:55.932861 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:55.932943 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:55.933270 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:56.432680 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:56.432756 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:56.433029 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:56.932719 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:56.932801 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:56.933134 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:57.432697 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:57.432773 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:57.433096 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:57.932659 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:57.932729 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:57.933033 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:27:57.933082 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:27:58.432760 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:58.432832 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:58.433186 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:58.932893 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:58.932974 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:58.933286 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:59.432662 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:59.432732 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:59.433040 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:27:59.932639 1187425 type.go:168] "Request Body" body=""
	I1209 04:27:59.932712 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:27:59.933039 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:00.432720 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:00.432811 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:00.433208 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:28:00.433277 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:28:00.932672 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:00.932741 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:00.933005 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:01.432725 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:01.432807 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:01.433146 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:01.932896 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:01.932975 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:01.933314 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:02.432655 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:02.432728 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:02.433016 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:02.932700 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:02.932781 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:02.933135 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:28:02.933190 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:28:03.432857 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:03.432934 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:03.433286 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:03.933001 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:03.933068 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:03.933321 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:04.432726 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:04.432801 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:04.433094 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:04.932617 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:04.932698 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:04.933036 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:05.432707 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:05.432788 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:05.433060 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:28:05.433107 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:28:05.932743 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:05.932818 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:05.933156 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:06.432696 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:06.432777 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:06.433116 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:06.933451 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:06.933527 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:06.933789 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:07.433539 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:07.433615 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:07.433955 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:28:07.434011 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:28:07.933609 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:07.933684 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:07.934024 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:08.432650 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:08.432722 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:08.433067 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:08.932695 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:08.932767 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:08.933107 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:09.330698 1187425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 04:28:09.392626 1187425 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:28:09.392671 1187425 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:28:09.392765 1187425 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
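	[editor's note] The storageclass apply above fails because kubectl's client-side validation needs the apiserver's /openapi/v2 endpoint, which is unreachable while the control plane is down; minikube logs "apply failed, will retry" and re-runs the callback later. A rough Go sketch of that retry shape follows, assuming a plain kubectl on PATH and the kubeconfig path from the log; this is illustrative only, not minikube's addons code.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

// applyWithRetry runs `kubectl apply --force -f manifest` and retries with a
// fixed backoff while the command keeps failing (e.g. connection refused).
func applyWithRetry(manifest string, attempts int, backoff time.Duration) error {
	var lastErr error
	for i := 0; i < attempts; i++ {
		cmd := exec.Command("kubectl", "apply", "--force", "-f", manifest)
		cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
		out, err := cmd.CombinedOutput()
		if err == nil {
			return nil
		}
		lastErr = fmt.Errorf("apply failed, will retry: %v\n%s", err, out)
		fmt.Println(lastErr)
		time.Sleep(backoff)
	}
	return lastErr
}

func main() {
	// Manifest path from the log; attempt count and backoff are assumptions.
	if err := applyWithRetry("/etc/kubernetes/addons/storageclass.yaml", 3, 7*time.Second); err != nil {
		fmt.Println("giving up:", err)
	}
}

	Note that the error text itself suggests --validate=false as an escape hatch; that would only skip validation, not fix the underlying refused connection.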
	I1209 04:28:09.432874 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:09.432952 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:09.433232 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:09.932653 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:09.932723 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:09.932991 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:28:09.933037 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:28:10.432663 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:10.432757 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:10.433041 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:10.932700 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:10.932793 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:10.933076 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:11.433216 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:11.433303 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:11.433575 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:11.933330 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:11.933412 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:11.933748 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:28:11.933801 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:28:12.433587 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:12.433670 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:12.434027 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:12.932705 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:12.932772 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:12.933018 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:13.432706 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:13.432787 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:13.433119 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:13.932999 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:13.933099 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:13.933392 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:14.432657 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:14.432736 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:14.433055 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:28:14.433109 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:28:14.932664 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:14.932748 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:14.933036 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:15.432674 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:15.432750 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:15.433098 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:15.932771 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:15.932842 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:15.933137 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:16.432714 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:16.432788 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:16.433087 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:28:16.433135 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:28:16.787371 1187425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:28:16.844461 1187425 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:28:16.844502 1187425 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:28:16.844590 1187425 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1209 04:28:16.849226 1187425 out.go:179] * Enabled addons: 
	I1209 04:28:16.852870 1187425 addons.go:530] duration metric: took 1m22.949297316s for enable addons: enabled=[]
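	[editor's note] With both addon callbacks still failing at the deadline, the phase ends with an empty result (enabled=[]) after ~1m23s, and minikube moves on. A small, self-contained sketch of how such a per-addon summary could be aggregated, using hypothetical names rather than minikube's internals:

package main

import (
	"errors"
	"fmt"
)

// enableAddons runs each addon callback, collecting successes and failures
// separately so a final summary line can report both.
func enableAddons(callbacks map[string]func() error) ([]string, error) {
	var enabled []string
	var errs []error
	for name, cb := range callbacks {
		if err := cb(); err != nil {
			errs = append(errs, fmt.Errorf("%s: %w", name, err))
			continue
		}
		enabled = append(enabled, name)
	}
	return enabled, errors.Join(errs...)
}

func main() {
	refused := errors.New("dial tcp [::1]:8441: connect: connection refused")
	enabled, err := enableAddons(map[string]func() error{
		"default-storageclass": func() error { return refused },
		"storage-provisioner":  func() error { return refused },
	})
	fmt.Printf("enabled=%v err=%v\n", enabled, err) // enabled=[] as in the log above
}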
	I1209 04:28:16.932633 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:16.932724 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:16.933045 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:17.432667 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:17.432732 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:17.433031 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:17.932701 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:17.932788 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:17.933067 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:18.432770 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:18.432843 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:18.433126 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:28:18.433178 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:28:18.932677 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:18.932752 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:18.932995 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:19.432687 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:19.432781 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:19.433100 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:19.932854 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:19.932926 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:19.933256 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:20.433039 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:20.433107 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:20.433386 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:28:20.433429 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:28:20.933212 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:20.933282 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:20.933581 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:21.433349 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:21.433421 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:21.433766 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:21.933219 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:21.933285 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:21.933576 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:22.433203 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:22.433273 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:22.433621 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:28:22.433676 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:28:22.933451 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:22.933536 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:22.933840 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:23.433217 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:23.433287 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:23.433546 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:23.933612 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:23.933689 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:23.934050 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:24.432755 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:24.432836 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:24.433161 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:24.932932 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:24.933008 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:24.933276 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:28:24.933327 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:28:25.432973 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:25.433049 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:25.433379 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:25.933099 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:25.933181 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:25.933530 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:26.433216 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:26.433283 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:26.433547 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:26.933320 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:26.933401 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:26.933762 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:28:26.933818 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:28:27.433589 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:27.433667 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:27.434004 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:27.932649 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:27.932724 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:27.933001 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:28.432685 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:28.432757 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:28.433490 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:28.933280 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:28.933359 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:28.933693 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:29.433205 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:29.433272 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:29.433545 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:28:29.433592 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:28:29.933575 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:29.933655 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:29.933979 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:30.432667 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:30.432747 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:30.433044 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:30.932681 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:30.932752 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:30.933046 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:31.432688 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:31.432771 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:31.433098 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:31.932806 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:31.932880 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:31.933203 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:28:31.933259 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:28:32.432774 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:32.432849 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:32.433097 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:32.932695 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:32.932765 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:32.933078 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:33.432694 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:33.432776 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:33.433090 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:33.932980 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:33.933051 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:33.933310 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:28:33.933359 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:28:34.432667 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:34.432745 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:34.433055 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:34.932949 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:34.933032 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:34.933356 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:35.433019 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:35.433096 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:35.433526 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:35.933390 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:35.933466 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:35.933812 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:28:35.933870 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:28:36.433595 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:36.433676 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:36.433996 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:36.932657 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:36.932727 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:36.933025 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:37.432697 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:37.432776 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:37.433068 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:37.932702 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:37.932780 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:37.933143 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:38.432646 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:38.432727 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:38.433055 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:28:38.433106 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:28:38.932743 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:38.932816 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:38.933130 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:39.432847 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:39.432919 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:39.433263 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:39.933046 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:39.933114 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:39.933379 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:40.432710 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:40.432783 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:40.433129 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:28:40.433184 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:28:40.932927 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:40.933008 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:40.933371 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:41.432689 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:41.432758 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:41.433014 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:41.932710 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:41.932795 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:41.933094 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:42.432689 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:42.432808 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:42.433149 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:28:42.433204 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:28:42.932862 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:42.932928 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:42.933226 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:43.432918 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:43.432995 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:43.433361 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:43.933127 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:43.933204 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:43.933534 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:28:44.433220 1187425 type.go:168] "Request Body" body=""
	I1209 04:28:44.433305 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:28:44.433609 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:28:44.433661 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	[... ~120 near-identical polling entries elided: the same GET https://192.168.49.2:8441/api/v1/nodes/functional-667319 request repeats every ~500ms from 04:28:44.93 through 04:29:44.93, each attempt failing at dial time (Response status="" milliseconds=0), with node_ready.go:55 logging the same "connection refused" retry warning roughly every fifth attempt, the last at 04:29:44.93 ...]
	I1209 04:29:45.432683 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:45.432761 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:45.433112 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:45.932794 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:45.932869 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:45.933154 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:46.432857 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:46.432932 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:46.433290 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:46.932721 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:46.932795 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:46.933140 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:47.432683 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:47.432767 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:47.433040 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:29:47.433081 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:29:47.932708 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:47.932784 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:47.933102 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:48.432814 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:48.432891 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:48.433180 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:48.932663 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:48.932737 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:48.932981 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:49.432666 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:49.432752 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:49.433069 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:29:49.433125 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:29:49.933001 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:49.933077 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:49.933427 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:50.433227 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:50.433297 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:50.433549 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:50.933296 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:50.933377 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:50.933694 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:51.433469 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:51.433544 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:51.433881 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:29:51.433935 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:29:51.933202 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:51.933269 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:51.933538 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:52.433290 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:52.433364 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:52.433675 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:52.933484 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:52.933559 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:52.933885 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:53.433239 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:53.433315 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:53.433579 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:53.933658 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:53.933737 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:53.934052 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:29:53.934108 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:29:54.432705 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:54.432789 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:54.433124 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:54.932891 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:54.932964 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:54.933236 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:55.432891 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:55.432964 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:55.433304 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:55.932875 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:55.932972 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:55.933352 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:56.433012 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:56.433094 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:56.433401 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:29:56.433443 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:29:56.932693 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:56.932777 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:56.933108 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:57.432826 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:57.432902 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:57.433221 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:57.932682 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:57.932755 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:57.933028 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:58.432718 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:58.432793 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:58.433140 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:58.932716 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:58.932793 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:58.933105 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:29:58.933168 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:29:59.432822 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:59.432892 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:59.433194 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:29:59.933078 1187425 type.go:168] "Request Body" body=""
	I1209 04:29:59.933150 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:29:59.933487 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:00.435081 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:00.435162 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:00.435476 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:00.933357 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:00.933452 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:00.933844 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:30:00.933904 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:30:01.433579 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:01.433688 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:01.434089 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:01.932814 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:01.932889 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:01.933149 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:02.432696 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:02.432770 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:02.433104 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:02.932820 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:02.932900 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:02.933272 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:03.432924 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:03.433018 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:03.433394 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:30:03.433446 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:30:03.933058 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:03.933138 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:03.933450 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:04.433250 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:04.433365 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:04.433699 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:04.933501 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:04.933567 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:04.933823 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:05.433624 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:05.433703 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:05.434043 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:30:05.434098 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:30:05.932702 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:05.932780 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:05.933132 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:06.432812 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:06.432885 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:06.433207 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:06.932725 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:06.932803 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:06.933164 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:07.432877 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:07.432965 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:07.433368 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:07.932664 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:07.932739 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:07.933045 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:30:07.933095 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:30:08.432746 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:08.432832 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:08.433233 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:08.932787 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:08.932862 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:08.933220 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:09.432722 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:09.432792 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:09.433075 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:09.932940 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:09.933018 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:09.933383 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:30:09.933446 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:30:10.432697 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:10.432775 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:10.433080 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:10.932767 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:10.932837 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:10.933122 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:11.432711 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:11.432785 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:11.433139 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:11.932864 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:11.932943 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:11.933292 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:12.432972 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:12.433050 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:12.433319 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:30:12.433362 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:30:12.932693 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:12.932770 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:12.933130 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:13.432817 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:13.432891 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:13.433211 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:13.932954 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:13.933023 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:13.933298 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:14.432961 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:14.433039 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:14.433383 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:30:14.433439 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:30:14.933212 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:14.933286 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:14.933615 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:15.433214 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:15.433283 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:15.433537 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:15.933372 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:15.933448 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:15.933750 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:16.433525 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:16.433604 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:16.433977 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:30:16.434106 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:30:16.932772 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:16.932839 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:16.933100 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:17.432712 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:17.432793 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:17.433089 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:17.932769 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:17.932849 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:17.933173 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:18.432930 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:18.432998 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:18.433257 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:18.932950 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:18.933025 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:18.933372 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:30:18.933434 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:30:19.432919 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:19.433009 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:19.433344 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:19.933155 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:19.933227 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:19.933491 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:20.433362 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:20.433448 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:20.433795 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:20.933260 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:20.933344 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:20.933670 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:30:20.933726 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:30:21.433173 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:21.433246 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:21.433511 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:21.933299 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:21.933379 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:21.933716 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:22.433492 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:22.433570 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:22.433867 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:22.933284 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:22.933366 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:22.933654 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:23.433368 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:23.433438 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:23.433760 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:30:23.433812 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:30:23.933593 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:23.933675 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:23.933994 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:24.432661 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:24.432729 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:24.432981 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:24.932865 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:24.932938 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:24.933361 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:25.432685 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:25.432763 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:25.433120 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:25.932767 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:25.932839 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:25.933140 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:30:25.933197 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:30:26.432718 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:26.432796 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:26.433197 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:26.932889 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:26.932990 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:26.933317 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:27.432657 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:27.432727 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:27.433032 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:27.932724 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:27.932798 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:27.933135 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:28.432696 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:28.432772 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:28.433073 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:30:28.433121 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:30:28.932660 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:28.932735 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:28.933045 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:29.432714 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:29.432786 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:29.433158 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:29.932918 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:29.933000 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:29.933354 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:30.432667 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:30.432740 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:30.433039 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:30.932757 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:30.932838 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:30.933183 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:30:30.933239 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:30:31.432899 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:31.432979 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:31.433354 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:31.933050 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:31.933119 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:31.933461 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:32.433235 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:32.433315 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:32.433644 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:32.933442 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:32.933524 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:32.933825 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:30:32.933872 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:30:33.433233 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:33.433304 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:33.433591 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:33.933547 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:33.933627 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:33.933938 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:34.432679 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:34.432763 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:34.433108 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:30:34.933586 1187425 type.go:168] "Request Body" body=""
	I1209 04:30:34.933660 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:30:34.933905 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:30:34.933945 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	[log condensed: the identical GET https://192.168.49.2:8441/api/v1/nodes/functional-667319 poll repeated every ~500ms from 04:30:35.432 through 04:31:34.933, each request returning the same empty response (status="" milliseconds=0), and node_ready.go:55 repeated the same "connection refused" warning roughly every 2 seconds throughout]
	I1209 04:31:35.433472 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:35.433550 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:35.433863 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:31:35.433925 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:31:35.932642 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:35.932726 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:35.933082 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:36.432718 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:36.432804 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:36.433204 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:36.932932 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:36.933006 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:36.933324 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:37.432701 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:37.432781 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:37.433098 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:37.932655 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:37.932744 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:37.933007 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:31:37.933067 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:31:38.432684 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:38.432762 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:38.433096 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:38.932741 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:38.932818 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:38.933151 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:39.432752 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:39.432821 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:39.433106 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:39.933656 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:39.933728 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:39.933989 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:31:39.934033 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:31:40.432680 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:40.432765 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:40.433112 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:40.932669 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:40.932734 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:40.933053 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:41.432690 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:41.432780 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:41.433200 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:41.932877 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:41.932953 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:41.933290 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:42.432904 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:42.432982 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:42.433302 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:31:42.433355 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:31:42.932706 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:42.932798 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:42.933087 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:43.432700 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:43.432776 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:43.433120 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:43.933050 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:43.933118 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:43.933424 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:44.432712 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:44.432790 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:44.433076 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:44.932987 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:44.933069 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:44.933451 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:31:44.933508 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:31:45.432652 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:45.432721 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:45.433020 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:45.932767 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:45.932842 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:45.933175 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:46.432686 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:46.432759 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:46.433102 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:46.932655 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:46.932727 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:46.933006 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:47.432684 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:47.432764 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:47.433090 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:31:47.433151 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:31:47.932730 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:47.932804 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:47.933206 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:48.432734 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:48.432808 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:48.433081 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:48.932704 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:48.932788 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:48.933086 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:49.432676 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:49.432758 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:49.433091 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:49.932841 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:49.932922 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:49.933214 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:31:49.933265 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:31:50.432697 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:50.432774 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:50.433083 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:50.932736 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:50.932814 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:50.933144 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:51.432737 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:51.432809 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:51.433098 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:51.932682 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:51.932765 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:51.933115 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:52.432819 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:52.432896 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:52.433241 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:31:52.433300 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:31:52.932671 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:52.932743 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:52.933011 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:53.432692 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:53.432763 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:53.433098 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:53.933090 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:53.933164 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:53.933488 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:54.432669 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:54.432748 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:54.433065 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:54.932936 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:54.933012 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:54.933364 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:31:54.933419 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:31:55.433079 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:55.433151 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:55.433486 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:55.933226 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:55.933296 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:55.933560 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:56.433428 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:56.433505 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:56.433878 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:56.932635 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:56.932709 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:56.933027 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:57.432658 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:57.432736 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:57.433044 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:31:57.433099 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:31:57.932711 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:57.932795 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:57.933103 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:58.432714 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:58.432787 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:58.433121 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:58.932651 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:58.932719 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:58.932975 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:31:59.432680 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:59.432759 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:59.433101 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:31:59.433157 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:31:59.932861 1187425 type.go:168] "Request Body" body=""
	I1209 04:31:59.932939 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:31:59.933269 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:00.432699 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:00.432786 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:00.433188 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:00.932718 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:00.932793 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:00.933119 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:01.432702 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:01.432778 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:01.433132 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:32:01.433188 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:32:01.932953 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:01.933059 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:01.933405 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:02.433066 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:02.433138 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:02.433476 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:02.933290 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:02.933362 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:02.933678 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:03.433228 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:03.433307 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:03.433557 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:32:03.433604 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:32:03.933531 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:03.933606 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:03.933926 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:04.432634 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:04.432709 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:04.433045 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:04.932774 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:04.932840 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:04.933129 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:05.432832 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:05.432907 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:05.433248 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:05.932726 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:05.932801 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:05.933145 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:32:05.933201 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:32:06.432676 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:06.432754 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:06.433037 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:06.932744 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:06.932823 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:06.933214 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:07.432896 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:07.432968 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:07.433319 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:07.933026 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:07.933110 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:07.933393 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:32:07.933441 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:32:08.432735 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:08.432817 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:08.433284 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:08.932871 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:08.932978 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:08.933325 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:09.432676 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:09.432743 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:09.432980 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:09.932851 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:09.932929 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:09.933264 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:10.432726 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:10.432817 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:10.433167 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:32:10.433217 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:32:10.932660 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:10.932732 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:10.933027 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:11.432714 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:11.432790 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:11.433154 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:11.932874 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:11.932955 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:11.933284 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:12.432656 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:12.432727 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:12.432974 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:12.932659 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:12.932737 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:12.933062 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:32:12.933115 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:32:13.432673 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:13.432745 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:13.433062 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:13.932942 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:13.933022 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:13.933305 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:14.432666 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:14.432737 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:14.433054 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:14.932628 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:14.932702 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:14.933027 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:15.432739 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:15.432819 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:15.433087 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:32:15.433132 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:32:15.932808 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:15.932886 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:15.933232 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:16.432939 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:16.433082 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:16.433415 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:16.933224 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:16.933297 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:16.933611 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:17.433392 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:17.433466 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:17.433806 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:32:17.433861 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:32:17.933475 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:17.933557 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:17.933868 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:18.433222 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:18.433292 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:18.433591 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:18.933254 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:18.933331 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:18.933670 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:19.433471 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:19.433558 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:19.433901 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:32:19.433958 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:32:19.932590 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:19.932659 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:19.932906 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:20.432666 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:20.432742 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:20.433050 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:20.932683 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:20.932763 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:20.933127 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:21.432653 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:21.432736 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:21.433076 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:21.932743 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:21.932826 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:21.933126 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:32:21.933180 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:32:22.432753 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:22.432822 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:22.433257 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:22.932652 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:22.932719 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:22.933033 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:23.432711 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:23.432784 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:23.433133 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:23.933069 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:23.933144 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:23.933485 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:32:23.933542 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-667319": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:32:24.433150 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:24.433216 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:24.433549 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:24.933587 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:24.933667 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:24.933983 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:25.432703 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:25.432780 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:25.433147 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... the same GET https://192.168.49.2:8441/api/v1/nodes/functional-667319 request/response pair repeats every ~500ms from 04:32:25.9 through 04:32:53.9; every attempt is refused, and roughly every 2s node_ready.go:55 logs the retry warning: error getting node "functional-667319" condition "Ready" status (will retry): dial tcp 192.168.49.2:8441: connect: connection refused ...]
	I1209 04:32:54.432659 1187425 type.go:168] "Request Body" body=""
	I1209 04:32:54.432727 1187425 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-667319" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:32:54.433303 1187425 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:32:54.933397 1187425 type.go:168] "Request Body" body=""
	W1209 04:32:54.933475 1187425 node_ready.go:55] error getting node "functional-667319" condition "Ready" status (will retry): client rate limiter Wait returned an error: context deadline exceeded
	I1209 04:32:54.933495 1187425 node_ready.go:38] duration metric: took 6m0.001016343s for node "functional-667319" to be "Ready" ...
	I1209 04:32:54.936503 1187425 out.go:203] 
	W1209 04:32:54.939246 1187425 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1209 04:32:54.939264 1187425 out.go:285] * 
	W1209 04:32:54.941401 1187425 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 04:32:54.944197 1187425 out.go:203] 
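The six-minute wait above is minikube's node-ready gate: it re-issues the same GET for the node object every ~500ms and gives up once the 6m0s budget is spent. Every attempt here died with "connection refused", i.e. nothing was listening on 192.168.49.2:8441 at all. A minimal sketch of the same probe done by hand, reusing the node name and kubeconfig path from this log (the jsonpath expression is our own, not minikube's):

	# Fetch the node's Ready condition, the same object minikube polls above;
	# while the apiserver is down this fails with the same "connection refused".
	kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  get node functional-667319 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'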
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 09 04:33:02 functional-667319 containerd[5187]: time="2025-12-09T04:33:02.294041580Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 09 04:33:03 functional-667319 containerd[5187]: time="2025-12-09T04:33:03.355738360Z" level=info msg="No images store for sha256:a1f83055284ec302ac691d8677946d8b4e772fb7071d39ada1cc9184cb70814b"
	Dec 09 04:33:03 functional-667319 containerd[5187]: time="2025-12-09T04:33:03.357839975Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:latest\""
	Dec 09 04:33:03 functional-667319 containerd[5187]: time="2025-12-09T04:33:03.366240053Z" level=info msg="ImageCreate event name:\"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 09 04:33:03 functional-667319 containerd[5187]: time="2025-12-09T04:33:03.366711399Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 09 04:33:04 functional-667319 containerd[5187]: time="2025-12-09T04:33:04.314815745Z" level=info msg="No images store for sha256:a39ec332fe9389ac4cf25eee02b25033c0ceb4d88e27730c4ef90701385b405e"
	Dec 09 04:33:04 functional-667319 containerd[5187]: time="2025-12-09T04:33:04.317047408Z" level=info msg="ImageCreate event name:\"docker.io/library/minikube-local-cache-test:functional-667319\""
	Dec 09 04:33:04 functional-667319 containerd[5187]: time="2025-12-09T04:33:04.324208006Z" level=info msg="ImageCreate event name:\"sha256:f396cc1d2a2f792c8359c58d4cd23fe6d949d3fd4d68a61961f5310e98abe14b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 09 04:33:04 functional-667319 containerd[5187]: time="2025-12-09T04:33:04.324769950Z" level=info msg="ImageUpdate event name:\"docker.io/library/minikube-local-cache-test:functional-667319\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 09 04:33:05 functional-667319 containerd[5187]: time="2025-12-09T04:33:05.115841091Z" level=info msg="RemoveImage \"registry.k8s.io/pause:latest\""
	Dec 09 04:33:05 functional-667319 containerd[5187]: time="2025-12-09T04:33:05.118305576Z" level=info msg="ImageDelete event name:\"registry.k8s.io/pause:latest\""
	Dec 09 04:33:05 functional-667319 containerd[5187]: time="2025-12-09T04:33:05.121327976Z" level=info msg="ImageDelete event name:\"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a\""
	Dec 09 04:33:05 functional-667319 containerd[5187]: time="2025-12-09T04:33:05.131814810Z" level=info msg="RemoveImage \"registry.k8s.io/pause:latest\" returns successfully"
	Dec 09 04:33:06 functional-667319 containerd[5187]: time="2025-12-09T04:33:06.216059891Z" level=info msg="No images store for sha256:a1f83055284ec302ac691d8677946d8b4e772fb7071d39ada1cc9184cb70814b"
	Dec 09 04:33:06 functional-667319 containerd[5187]: time="2025-12-09T04:33:06.218615515Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:latest\""
	Dec 09 04:33:06 functional-667319 containerd[5187]: time="2025-12-09T04:33:06.225791605Z" level=info msg="ImageCreate event name:\"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 09 04:33:06 functional-667319 containerd[5187]: time="2025-12-09T04:33:06.226281937Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 09 04:33:06 functional-667319 containerd[5187]: time="2025-12-09T04:33:06.247323820Z" level=info msg="RemoveImage \"registry.k8s.io/pause:3.1\""
	Dec 09 04:33:06 functional-667319 containerd[5187]: time="2025-12-09T04:33:06.249630672Z" level=info msg="ImageDelete event name:\"registry.k8s.io/pause:3.1\""
	Dec 09 04:33:06 functional-667319 containerd[5187]: time="2025-12-09T04:33:06.251480693Z" level=info msg="ImageDelete event name:\"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5\""
	Dec 09 04:33:06 functional-667319 containerd[5187]: time="2025-12-09T04:33:06.259469246Z" level=info msg="RemoveImage \"registry.k8s.io/pause:3.1\" returns successfully"
	Dec 09 04:33:06 functional-667319 containerd[5187]: time="2025-12-09T04:33:06.401349115Z" level=info msg="No images store for sha256:3ac89611d5efd8eb74174b1f04c33b7e73b651cec35b5498caf0cfdd2efd7d48"
	Dec 09 04:33:06 functional-667319 containerd[5187]: time="2025-12-09T04:33:06.403483780Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.1\""
	Dec 09 04:33:06 functional-667319 containerd[5187]: time="2025-12-09T04:33:06.413748704Z" level=info msg="ImageCreate event name:\"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 09 04:33:06 functional-667319 containerd[5187]: time="2025-12-09T04:33:06.414146190Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:33:10.434622    9331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:33:10.435375    9331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:33:10.436853    9331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:33:10.437233    9331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:33:10.438708    9331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 9 03:13] overlayfs: idmapped layers are currently not supported
	[ +25.904254] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:14] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:16] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:18] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:19] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:30] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:32] overlayfs: idmapped layers are currently not supported
	[ +28.114653] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:33] overlayfs: idmapped layers are currently not supported
	[ +23.720849] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:34] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:35] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:36] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:37] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:38] overlayfs: idmapped layers are currently not supported
	[ +23.656275] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:39] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:57] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:58] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:00] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:02] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:03] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:05] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:06] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 04:33:10 up  7:15,  0 user,  load average: 0.42, 0.29, 0.80
	Linux functional-667319 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 09 04:33:07 functional-667319 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 09 04:33:07 functional-667319 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 826.
	Dec 09 04:33:07 functional-667319 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:33:07 functional-667319 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:33:07 functional-667319 kubelet[9155]: E1209 04:33:07.990606    9155 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 09 04:33:07 functional-667319 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 09 04:33:07 functional-667319 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 09 04:33:08 functional-667319 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 827.
	Dec 09 04:33:08 functional-667319 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:33:08 functional-667319 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:33:08 functional-667319 kubelet[9207]: E1209 04:33:08.751760    9207 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 09 04:33:08 functional-667319 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 09 04:33:08 functional-667319 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 09 04:33:09 functional-667319 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 828.
	Dec 09 04:33:09 functional-667319 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:33:09 functional-667319 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:33:09 functional-667319 kubelet[9235]: E1209 04:33:09.492382    9235 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 09 04:33:09 functional-667319 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 09 04:33:09 functional-667319 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 09 04:33:10 functional-667319 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 829.
	Dec 09 04:33:10 functional-667319 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:33:10 functional-667319 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:33:10 functional-667319 kubelet[9278]: E1209 04:33:10.240723    9278 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 09 04:33:10 functional-667319 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 09 04:33:10 functional-667319 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
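The kubelet section above explains the whole failure: systemd has restarted the kubelet hundreds of times (counters 826-829 in this window alone), and every start exits during config validation because kubelet v1.35 refuses to run on a cgroup v1 host unless that is explicitly allowed. Two quick checks, sketched under the assumption that the YAML field matches the 'FailCgroupV1' option named in the error (exact field casing is assumed):

	# Which cgroup hierarchy is mounted? "cgroup2fs" means v2, "tmpfs" the legacy v1 layout.
	stat -fc %T /sys/fs/cgroup

	# Hypothetical opt-in for staying on cgroup v1, per the error text above:
	echo 'failCgroupV1: false' | sudo tee -a /var/lib/kubelet/config.yaml
	sudo systemctl restart kubelet

The durable fix is to run the node on a cgroup v2 host; the opt-in merely defers the removal the error message warns about.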
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-667319 -n functional-667319
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-667319 -n functional-667319: exit status 2 (355.442418ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-667319" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (2.21s)
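The status check the harness runs above uses minikube's Go-template output, where `{{.APIServer}}` is one field of the status struct. A sketch of a broader one-liner, assuming the usual Host/Kubelet/APIServer field names:

	# Prints e.g. "Running Stopped Stopped" for a booted container whose
	# kubelet and apiserver never came up (field names are an assumption).
	out/minikube-linux-arm64 status -p functional-667319 \
	  --format='{{.Host}} {{.Kubelet}} {{.APIServer}}'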

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (736.43s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-667319 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1209 04:36:06.736193 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/addons-221952/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 04:37:38.984963 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-717497/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 04:39:02.059604 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-717497/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 04:41:06.737771 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/addons-221952/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 04:42:38.986955 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-717497/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-667319 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 109 (12m14.313933546s)

                                                
                                                
-- stdout --
	* [functional-667319] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22081
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22081-1142328/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-1142328/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "functional-667319" primary control-plane node in "functional-667319" cluster
	* Pulling base image v0.0.48-1765184860-22066 ...
	* Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	  - apiserver.enable-admission-plugins=NamespaceAutoProvision
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001187798s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
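kubeadm's wait-control-plane phase does nothing more exotic than the probe it quotes: it polls the kubelet's healthz endpoint for up to 4m0s, and here the kubelet never binds the port because it exits during validation (the same cgroup v1 check shown in the kubelet log above). The probe, verbatim from the error message:

	# Returns "ok" once the kubelet is healthy; on this host it times out
	# because the kubelet exits before ever listening on :10248.
	curl -sSL http://127.0.0.1:10248/healthz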
	
	* 
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[... kubeadm init stdout and stderr identical to the attempt above; the kubelet health check fails again after 4m0.000314916s, with the same SystemVerification and Service-kubelet warnings ...]
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[... kubeadm init stdout identical to the attempts above ...]
	
	stderr:
	[... same SystemVerification and Service-kubelet warnings as above ...]
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Related issue: https://github.com/kubernetes/minikube/issues/4172

** /stderr **
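The failure above is kubeadm giving up after four minutes of polling the kubelet health endpoint on the node. A minimal way to rerun that probe by hand and apply the workaround the output suggests (a sketch only, assuming the functional-667319 profile from this run is still up and reachable via 'minikube ssh'):

	# Probe the endpoint kubeadm polls, from inside the node:
	minikube -p functional-667319 ssh -- curl -sS http://127.0.0.1:10248/healthz
	# Inspect kubelet state and logs, per the advice printed above:
	minikube -p functional-667319 ssh -- sudo systemctl status kubelet
	minikube -p functional-667319 ssh -- sudo journalctl -xeu kubelet
	# Retry the start with the suggested cgroup-driver override:
	out/minikube-linux-arm64 start -p functional-667319 --extra-config=kubelet.cgroup-driver=systemd

The cgroups v1 warning in stderr names the KubeletConfiguration option FailCgroupV1: per that message, keeping kubelet v1.35 or newer on a cgroup v1 host additionally requires failCgroupV1: false in the kubelet configuration (written above as /var/lib/kubelet/config.yaml) and explicitly skipping the validation.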
functional_test.go:774: failed to restart minikube. args "out/minikube-linux-arm64 start -p functional-667319 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 109
functional_test.go:776: restart took 12m14.315197592s for "functional-667319" cluster.
I1209 04:45:25.635675 1144231 config.go:182] Loaded profile config "functional-667319": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-667319
helpers_test.go:243: (dbg) docker inspect functional-667319:

-- stdout --
	[
	    {
	        "Id": "e5b6511799c8d5c445a335a3bd5cc9a61b518fc27ac93dad8800da366ef32129",
	        "Created": "2025-12-09T04:18:34.060957311Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1182075,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-09T04:18:34.126944158Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e4eb91ed18a24161fce60c7cdd660144ecd5b8c5029dc2dea2c5e423c2f48ce4",
	        "ResolvConfPath": "/var/lib/docker/containers/e5b6511799c8d5c445a335a3bd5cc9a61b518fc27ac93dad8800da366ef32129/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e5b6511799c8d5c445a335a3bd5cc9a61b518fc27ac93dad8800da366ef32129/hostname",
	        "HostsPath": "/var/lib/docker/containers/e5b6511799c8d5c445a335a3bd5cc9a61b518fc27ac93dad8800da366ef32129/hosts",
	        "LogPath": "/var/lib/docker/containers/e5b6511799c8d5c445a335a3bd5cc9a61b518fc27ac93dad8800da366ef32129/e5b6511799c8d5c445a335a3bd5cc9a61b518fc27ac93dad8800da366ef32129-json.log",
	        "Name": "/functional-667319",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-667319:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-667319",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e5b6511799c8d5c445a335a3bd5cc9a61b518fc27ac93dad8800da366ef32129",
	                "LowerDir": "/var/lib/docker/overlay2/b0239006282b6e4609a1f554d0a3fb94c749a13505795c8e4078cb2db194e8e0-init/diff:/var/lib/docker/overlay2/c44bb57aa59cc265266f37f2bb6e7ec0e7d641c3b4aeaa57e6d23deec6f0d1d4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b0239006282b6e4609a1f554d0a3fb94c749a13505795c8e4078cb2db194e8e0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b0239006282b6e4609a1f554d0a3fb94c749a13505795c8e4078cb2db194e8e0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b0239006282b6e4609a1f554d0a3fb94c749a13505795c8e4078cb2db194e8e0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-667319",
	                "Source": "/var/lib/docker/volumes/functional-667319/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-667319",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-667319",
	                "name.minikube.sigs.k8s.io": "functional-667319",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7c81dabcd9e57af9bce0bc0f5619f6ef3a27af43f4b649283a5bd778ab256415",
	            "SandboxKey": "/var/run/docker/netns/7c81dabcd9e5",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33900"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33901"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33904"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33902"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33903"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-667319": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "fe:40:bd:46:56:d8",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "88b3a65de70c15005c532a44219284d4df94e474ca5b78b04514c2f932b03beb",
	                    "EndpointID": "bdef7b156f4a28c1f641ae70b42db2750bb810ae6fe93fd65325e62eb232fe91",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-667319",
	                        "e5b6511799c8"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-667319 -n functional-667319
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-667319 -n functional-667319: exit status 2 (311.016802ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-667319 logs -n 25
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                          ARGS                                                                           │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-717497 image ls --format yaml --alsologtostderr                                                                                              │ functional-717497 │ jenkins │ v1.37.0 │ 09 Dec 25 04:18 UTC │ 09 Dec 25 04:18 UTC │
	│ ssh     │ functional-717497 ssh pgrep buildkitd                                                                                                                   │ functional-717497 │ jenkins │ v1.37.0 │ 09 Dec 25 04:18 UTC │                     │
	│ image   │ functional-717497 image ls --format json --alsologtostderr                                                                                              │ functional-717497 │ jenkins │ v1.37.0 │ 09 Dec 25 04:18 UTC │ 09 Dec 25 04:18 UTC │
	│ image   │ functional-717497 image build -t localhost/my-image:functional-717497 testdata/build --alsologtostderr                                                  │ functional-717497 │ jenkins │ v1.37.0 │ 09 Dec 25 04:18 UTC │ 09 Dec 25 04:18 UTC │
	│ image   │ functional-717497 image ls --format table --alsologtostderr                                                                                             │ functional-717497 │ jenkins │ v1.37.0 │ 09 Dec 25 04:18 UTC │ 09 Dec 25 04:18 UTC │
	│ image   │ functional-717497 image ls                                                                                                                              │ functional-717497 │ jenkins │ v1.37.0 │ 09 Dec 25 04:18 UTC │ 09 Dec 25 04:18 UTC │
	│ delete  │ -p functional-717497                                                                                                                                    │ functional-717497 │ jenkins │ v1.37.0 │ 09 Dec 25 04:18 UTC │ 09 Dec 25 04:18 UTC │
	│ start   │ -p functional-667319 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:18 UTC │                     │
	│ start   │ -p functional-667319 --alsologtostderr -v=8                                                                                                             │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:26 UTC │                     │
	│ cache   │ functional-667319 cache add registry.k8s.io/pause:3.1                                                                                                   │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:32 UTC │ 09 Dec 25 04:33 UTC │
	│ cache   │ functional-667319 cache add registry.k8s.io/pause:3.3                                                                                                   │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:33 UTC │ 09 Dec 25 04:33 UTC │
	│ cache   │ functional-667319 cache add registry.k8s.io/pause:latest                                                                                                │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:33 UTC │ 09 Dec 25 04:33 UTC │
	│ cache   │ functional-667319 cache add minikube-local-cache-test:functional-667319                                                                                 │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:33 UTC │ 09 Dec 25 04:33 UTC │
	│ cache   │ functional-667319 cache delete minikube-local-cache-test:functional-667319                                                                              │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:33 UTC │ 09 Dec 25 04:33 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 09 Dec 25 04:33 UTC │ 09 Dec 25 04:33 UTC │
	│ cache   │ list                                                                                                                                                    │ minikube          │ jenkins │ v1.37.0 │ 09 Dec 25 04:33 UTC │ 09 Dec 25 04:33 UTC │
	│ ssh     │ functional-667319 ssh sudo crictl images                                                                                                                │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:33 UTC │ 09 Dec 25 04:33 UTC │
	│ ssh     │ functional-667319 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                      │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:33 UTC │ 09 Dec 25 04:33 UTC │
	│ ssh     │ functional-667319 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                 │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:33 UTC │                     │
	│ cache   │ functional-667319 cache reload                                                                                                                          │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:33 UTC │ 09 Dec 25 04:33 UTC │
	│ ssh     │ functional-667319 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                 │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:33 UTC │ 09 Dec 25 04:33 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 09 Dec 25 04:33 UTC │ 09 Dec 25 04:33 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                                     │ minikube          │ jenkins │ v1.37.0 │ 09 Dec 25 04:33 UTC │ 09 Dec 25 04:33 UTC │
	│ kubectl │ functional-667319 kubectl -- --context functional-667319 get pods                                                                                       │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:33 UTC │                     │
	│ start   │ -p functional-667319 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                                                │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:33 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/09 04:33:11
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 04:33:11.365325 1193189 out.go:360] Setting OutFile to fd 1 ...
	I1209 04:33:11.365424 1193189 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 04:33:11.365428 1193189 out.go:374] Setting ErrFile to fd 2...
	I1209 04:33:11.365431 1193189 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 04:33:11.365670 1193189 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-1142328/.minikube/bin
	I1209 04:33:11.366033 1193189 out.go:368] Setting JSON to false
	I1209 04:33:11.366848 1193189 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":26115,"bootTime":1765228677,"procs":156,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1209 04:33:11.366902 1193189 start.go:143] virtualization:  
	I1209 04:33:11.370321 1193189 out.go:179] * [functional-667319] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1209 04:33:11.373998 1193189 out.go:179]   - MINIKUBE_LOCATION=22081
	I1209 04:33:11.374082 1193189 notify.go:221] Checking for updates...
	I1209 04:33:11.379822 1193189 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 04:33:11.382611 1193189 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22081-1142328/kubeconfig
	I1209 04:33:11.385432 1193189 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-1142328/.minikube
	I1209 04:33:11.388728 1193189 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1209 04:33:11.391441 1193189 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 04:33:11.394813 1193189 config.go:182] Loaded profile config "functional-667319": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1209 04:33:11.394910 1193189 driver.go:422] Setting default libvirt URI to qemu:///system
	I1209 04:33:11.422551 1193189 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1209 04:33:11.422654 1193189 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 04:33:11.481358 1193189 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-09 04:33:11.472506561 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1209 04:33:11.481459 1193189 docker.go:319] overlay module found
	I1209 04:33:11.484471 1193189 out.go:179] * Using the docker driver based on existing profile
	I1209 04:33:11.487406 1193189 start.go:309] selected driver: docker
	I1209 04:33:11.487427 1193189 start.go:927] validating driver "docker" against &{Name:functional-667319 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-667319 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 04:33:11.487512 1193189 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 04:33:11.487612 1193189 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 04:33:11.542290 1193189 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-09 04:33:11.533632532 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1209 04:33:11.542703 1193189 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 04:33:11.542726 1193189 cni.go:84] Creating CNI manager for ""
	I1209 04:33:11.542784 1193189 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1209 04:33:11.542826 1193189 start.go:353] cluster config:
	{Name:functional-667319 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-667319 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 04:33:11.546045 1193189 out.go:179] * Starting "functional-667319" primary control-plane node in "functional-667319" cluster
	I1209 04:33:11.548925 1193189 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1209 04:33:11.551638 1193189 out.go:179] * Pulling base image v0.0.48-1765184860-22066 ...
	I1209 04:33:11.554609 1193189 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1209 04:33:11.554645 1193189 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1209 04:33:11.554670 1193189 cache.go:65] Caching tarball of preloaded images
	I1209 04:33:11.554693 1193189 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local docker daemon
	I1209 04:33:11.554756 1193189 preload.go:238] Found /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1209 04:33:11.554765 1193189 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1209 04:33:11.554868 1193189 profile.go:143] Saving config to /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/config.json ...
	I1209 04:33:11.573683 1193189 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local docker daemon, skipping pull
	I1209 04:33:11.573695 1193189 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c exists in daemon, skipping load
	I1209 04:33:11.573713 1193189 cache.go:243] Successfully downloaded all kic artifacts
	I1209 04:33:11.573740 1193189 start.go:360] acquireMachinesLock for functional-667319: {Name:mk6c31f0747796f5f8ac8ea1653d6ee60fe2a47d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 04:33:11.573797 1193189 start.go:364] duration metric: took 42.739µs to acquireMachinesLock for "functional-667319"
	I1209 04:33:11.573815 1193189 start.go:96] Skipping create...Using existing machine configuration
	I1209 04:33:11.573819 1193189 fix.go:54] fixHost starting: 
	I1209 04:33:11.574074 1193189 cli_runner.go:164] Run: docker container inspect functional-667319 --format={{.State.Status}}
	I1209 04:33:11.589947 1193189 fix.go:112] recreateIfNeeded on functional-667319: state=Running err=<nil>
	W1209 04:33:11.589973 1193189 fix.go:138] unexpected machine state, will restart: <nil>
	I1209 04:33:11.593148 1193189 out.go:252] * Updating the running docker "functional-667319" container ...
	I1209 04:33:11.593168 1193189 machine.go:94] provisionDockerMachine start ...
	I1209 04:33:11.593256 1193189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-667319
	I1209 04:33:11.609392 1193189 main.go:143] libmachine: Using SSH client type: native
	I1209 04:33:11.609722 1193189 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 33900 <nil> <nil>}
	I1209 04:33:11.609729 1193189 main.go:143] libmachine: About to run SSH command:
	hostname
	I1209 04:33:11.759408 1193189 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-667319
	
	I1209 04:33:11.759422 1193189 ubuntu.go:182] provisioning hostname "functional-667319"
	I1209 04:33:11.759483 1193189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-667319
	I1209 04:33:11.776859 1193189 main.go:143] libmachine: Using SSH client type: native
	I1209 04:33:11.777189 1193189 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 33900 <nil> <nil>}
	I1209 04:33:11.777198 1193189 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-667319 && echo "functional-667319" | sudo tee /etc/hostname
	I1209 04:33:11.939211 1193189 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-667319
	
	I1209 04:33:11.939295 1193189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-667319
	I1209 04:33:11.957143 1193189 main.go:143] libmachine: Using SSH client type: native
	I1209 04:33:11.957494 1193189 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 33900 <nil> <nil>}
	I1209 04:33:11.957508 1193189 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-667319' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-667319/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-667319' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 04:33:12.113237 1193189 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1209 04:33:12.113254 1193189 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22081-1142328/.minikube CaCertPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22081-1142328/.minikube}
	I1209 04:33:12.113278 1193189 ubuntu.go:190] setting up certificates
	I1209 04:33:12.113294 1193189 provision.go:84] configureAuth start
	I1209 04:33:12.113362 1193189 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-667319
	I1209 04:33:12.130912 1193189 provision.go:143] copyHostCerts
	I1209 04:33:12.131003 1193189 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.pem, removing ...
	I1209 04:33:12.131010 1193189 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.pem
	I1209 04:33:12.131086 1193189 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.pem (1078 bytes)
	I1209 04:33:12.131177 1193189 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-1142328/.minikube/cert.pem, removing ...
	I1209 04:33:12.131181 1193189 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-1142328/.minikube/cert.pem
	I1209 04:33:12.131205 1193189 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22081-1142328/.minikube/cert.pem (1123 bytes)
	I1209 04:33:12.131250 1193189 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-1142328/.minikube/key.pem, removing ...
	I1209 04:33:12.131254 1193189 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-1142328/.minikube/key.pem
	I1209 04:33:12.131276 1193189 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22081-1142328/.minikube/key.pem (1675 bytes)
	I1209 04:33:12.131318 1193189 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca-key.pem org=jenkins.functional-667319 san=[127.0.0.1 192.168.49.2 functional-667319 localhost minikube]
	I1209 04:33:12.827484 1193189 provision.go:177] copyRemoteCerts
	I1209 04:33:12.827535 1193189 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 04:33:12.827573 1193189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-667319
	I1209 04:33:12.846654 1193189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/functional-667319/id_rsa Username:docker}
	I1209 04:33:12.951639 1193189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1209 04:33:12.968320 1193189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1209 04:33:12.985745 1193189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1209 04:33:13.004711 1193189 provision.go:87] duration metric: took 891.395644ms to configureAuth
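	The configureAuth step above regenerates the Docker Machine server certificate with the SAN list logged at 04:33:12 (127.0.0.1, 192.168.49.2, functional-667319, localhost, minikube) and copies it to /etc/docker/server.pem on the node. A quick check that those SANs landed in the copied certificate, sketched on the assumption that openssl is present in the kicbase image:
	
		minikube -p functional-667319 ssh -- sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'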
	I1209 04:33:13.004730 1193189 ubuntu.go:206] setting minikube options for container-runtime
	I1209 04:33:13.005000 1193189 config.go:182] Loaded profile config "functional-667319": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1209 04:33:13.005006 1193189 machine.go:97] duration metric: took 1.411833664s to provisionDockerMachine
	I1209 04:33:13.005012 1193189 start.go:293] postStartSetup for "functional-667319" (driver="docker")
	I1209 04:33:13.005022 1193189 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 04:33:13.005072 1193189 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 04:33:13.005108 1193189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-667319
	I1209 04:33:13.023376 1193189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/functional-667319/id_rsa Username:docker}
	I1209 04:33:13.128032 1193189 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 04:33:13.131471 1193189 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1209 04:33:13.131490 1193189 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1209 04:33:13.131500 1193189 filesync.go:126] Scanning /home/jenkins/minikube-integration/22081-1142328/.minikube/addons for local assets ...
	I1209 04:33:13.131552 1193189 filesync.go:126] Scanning /home/jenkins/minikube-integration/22081-1142328/.minikube/files for local assets ...
	I1209 04:33:13.131625 1193189 filesync.go:149] local asset: /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/ssl/certs/11442312.pem -> 11442312.pem in /etc/ssl/certs
	I1209 04:33:13.131701 1193189 filesync.go:149] local asset: /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/test/nested/copy/1144231/hosts -> hosts in /etc/test/nested/copy/1144231
	I1209 04:33:13.131749 1193189 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1144231
	I1209 04:33:13.139091 1193189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/ssl/certs/11442312.pem --> /etc/ssl/certs/11442312.pem (1708 bytes)
	I1209 04:33:13.156114 1193189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/test/nested/copy/1144231/hosts --> /etc/test/nested/copy/1144231/hosts (40 bytes)
	I1209 04:33:13.173744 1193189 start.go:296] duration metric: took 168.716821ms for postStartSetup
	I1209 04:33:13.173816 1193189 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1209 04:33:13.173854 1193189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-667319
	I1209 04:33:13.198555 1193189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/functional-667319/id_rsa Username:docker}
	I1209 04:33:13.300903 1193189 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1209 04:33:13.305102 1193189 fix.go:56] duration metric: took 1.731276319s for fixHost
	I1209 04:33:13.305116 1193189 start.go:83] releasing machines lock for "functional-667319", held for 1.731312428s
	I1209 04:33:13.305216 1193189 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-667319
	I1209 04:33:13.322301 1193189 ssh_runner.go:195] Run: cat /version.json
	I1209 04:33:13.322356 1193189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-667319
	I1209 04:33:13.322602 1193189 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 04:33:13.322654 1193189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-667319
	I1209 04:33:13.345854 1193189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/functional-667319/id_rsa Username:docker}
	I1209 04:33:13.346808 1193189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/functional-667319/id_rsa Username:docker}
	I1209 04:33:13.447601 1193189 ssh_runner.go:195] Run: systemctl --version
	I1209 04:33:13.537710 1193189 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 04:33:13.542181 1193189 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 04:33:13.542253 1193189 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 04:33:13.550371 1193189 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1209 04:33:13.550385 1193189 start.go:496] detecting cgroup driver to use...
	I1209 04:33:13.550417 1193189 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1209 04:33:13.550479 1193189 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1209 04:33:13.565987 1193189 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1209 04:33:13.579220 1193189 docker.go:218] disabling cri-docker service (if available) ...
	I1209 04:33:13.579279 1193189 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 04:33:13.594632 1193189 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 04:33:13.607810 1193189 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 04:33:13.745867 1193189 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 04:33:13.855372 1193189 docker.go:234] disabling docker service ...
	I1209 04:33:13.855434 1193189 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 04:33:13.878271 1193189 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 04:33:13.891442 1193189 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 04:33:14.014618 1193189 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 04:33:14.144235 1193189 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 04:33:14.157713 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 04:33:14.171634 1193189 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1209 04:33:14.180595 1193189 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1209 04:33:14.189855 1193189 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1209 04:33:14.189928 1193189 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1209 04:33:14.198663 1193189 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1209 04:33:14.207241 1193189 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1209 04:33:14.215864 1193189 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1209 04:33:14.224572 1193189 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 04:33:14.232585 1193189 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1209 04:33:14.241204 1193189 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1209 04:33:14.249919 1193189 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1209 04:33:14.258812 1193189 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 04:33:14.266241 1193189 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 04:33:14.273587 1193189 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 04:33:14.393428 1193189 ssh_runner.go:195] Run: sudo systemctl restart containerd
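	The sed sequence above rewrites /etc/containerd/config.toml in place before the daemon-reload and restart. Assuming the stock version-2 config layout those sed patterns target (the "io.containerd.grpc.v1.cri" plugin table), the intended end state is roughly the fragment below (a sketch of what the edits produce, not a capture from this run):
	
		[plugins."io.containerd.grpc.v1.cri"]
		  enable_unprivileged_ports = true
		  sandbox_image = "registry.k8s.io/pause:3.10.1"
		  restrict_oom_score_adj = false
		  [plugins."io.containerd.grpc.v1.cri".cni]
		    conf_dir = "/etc/cni/net.d"
		  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
		    runtime_type = "io.containerd.runc.v2"
		    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
		      SystemdCgroup = false
	
	SystemdCgroup = false keeps containerd on cgroupfs, matching the "cgroupfs" driver detected on the host at 04:33:13, rather than the systemd driver that the earlier kubelet failure suggestion recommends trying.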
	I1209 04:33:14.528665 1193189 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1209 04:33:14.528726 1193189 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1209 04:33:14.532955 1193189 start.go:564] Will wait 60s for crictl version
	I1209 04:33:14.533056 1193189 ssh_runner.go:195] Run: which crictl
	I1209 04:33:14.541891 1193189 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1209 04:33:14.570282 1193189 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1209 04:33:14.570350 1193189 ssh_runner.go:195] Run: containerd --version
	I1209 04:33:14.592081 1193189 ssh_runner.go:195] Run: containerd --version
	I1209 04:33:14.617312 1193189 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1209 04:33:14.620294 1193189 cli_runner.go:164] Run: docker network inspect functional-667319 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1209 04:33:14.636105 1193189 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1209 04:33:14.643286 1193189 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1209 04:33:14.646097 1193189 kubeadm.go:884] updating cluster {Name:functional-667319 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-667319 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 04:33:14.646234 1193189 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1209 04:33:14.646312 1193189 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 04:33:14.671604 1193189 containerd.go:627] all images are preloaded for containerd runtime.
	I1209 04:33:14.671615 1193189 containerd.go:534] Images already preloaded, skipping extraction
	I1209 04:33:14.671676 1193189 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 04:33:14.702360 1193189 containerd.go:627] all images are preloaded for containerd runtime.
	I1209 04:33:14.702371 1193189 cache_images.go:86] Images are preloaded, skipping loading
	I1209 04:33:14.702376 1193189 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
	I1209 04:33:14.702482 1193189 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-667319 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-667319 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1209 04:33:14.702549 1193189 ssh_runner.go:195] Run: sudo crictl info
	I1209 04:33:14.731154 1193189 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1209 04:33:14.731172 1193189 cni.go:84] Creating CNI manager for ""
	I1209 04:33:14.731179 1193189 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1209 04:33:14.731190 1193189 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1209 04:33:14.731212 1193189 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-667319 NodeName:functional-667319 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1209 04:33:14.731316 1193189 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-667319"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1209 04:33:14.731385 1193189 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1209 04:33:14.742794 1193189 binaries.go:51] Found k8s binaries, skipping transfer
	I1209 04:33:14.742854 1193189 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1209 04:33:14.750345 1193189 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1209 04:33:14.763345 1193189 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1209 04:33:14.775780 1193189 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2087 bytes)
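With the rendered config now written to /var/tmp/minikube/kubeadm.yaml.new, it can in principle be sanity-checked offline before the init phases below replay it. A hedged sketch, assuming the bundled kubeadm supports the `kubeadm config validate` subcommand (present in recent kubeadm releases; not exercised in this log):

    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate \
        --config /var/tmp/minikube/kubeadm.yaml.new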
	I1209 04:33:14.788798 1193189 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
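The grep above only checks whether the control-plane alias is already present in /etc/hosts; the append that presumably follows when it is missing is not shown in this excerpt. A sketch of the usual guard (the append step is an assumption):

    grep -q 'control-plane.minikube.internal' /etc/hosts || \
        printf '192.168.49.2\tcontrol-plane.minikube.internal\n' | sudo tee -a /etc/hosts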
	I1209 04:33:14.792560 1193189 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 04:33:14.907792 1193189 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 04:33:15.431459 1193189 certs.go:69] Setting up /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319 for IP: 192.168.49.2
	I1209 04:33:15.431470 1193189 certs.go:195] generating shared ca certs ...
	I1209 04:33:15.431485 1193189 certs.go:227] acquiring lock for ca certs: {Name:mk15788702f8c4e23b5aeab3f44961d296fab259 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 04:33:15.431654 1193189 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.key
	I1209 04:33:15.431695 1193189 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/proxy-client-ca.key
	I1209 04:33:15.431701 1193189 certs.go:257] generating profile certs ...
	I1209 04:33:15.431782 1193189 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/client.key
	I1209 04:33:15.431840 1193189 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/apiserver.key.c80eb595
	I1209 04:33:15.431875 1193189 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/proxy-client.key
	I1209 04:33:15.431982 1193189 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/1144231.pem (1338 bytes)
	W1209 04:33:15.432037 1193189 certs.go:480] ignoring /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/1144231_empty.pem, impossibly tiny 0 bytes
	I1209 04:33:15.432046 1193189 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca-key.pem (1679 bytes)
	I1209 04:33:15.432075 1193189 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem (1078 bytes)
	I1209 04:33:15.432099 1193189 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/cert.pem (1123 bytes)
	I1209 04:33:15.432147 1193189 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/key.pem (1675 bytes)
	I1209 04:33:15.432195 1193189 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/ssl/certs/11442312.pem (1708 bytes)
	I1209 04:33:15.432796 1193189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 04:33:15.450868 1193189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1209 04:33:15.469951 1193189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 04:33:15.488029 1193189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1209 04:33:15.507676 1193189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1209 04:33:15.528269 1193189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1209 04:33:15.547354 1193189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 04:33:15.565510 1193189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1209 04:33:15.583378 1193189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/ssl/certs/11442312.pem --> /usr/share/ca-certificates/11442312.pem (1708 bytes)
	I1209 04:33:15.601546 1193189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 04:33:15.619028 1193189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/1144231.pem --> /usr/share/ca-certificates/1144231.pem (1338 bytes)
	I1209 04:33:15.636618 1193189 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 04:33:15.649310 1193189 ssh_runner.go:195] Run: openssl version
	I1209 04:33:15.655222 1193189 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/11442312.pem
	I1209 04:33:15.662530 1193189 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/11442312.pem /etc/ssl/certs/11442312.pem
	I1209 04:33:15.670168 1193189 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11442312.pem
	I1209 04:33:15.673829 1193189 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 04:18 /usr/share/ca-certificates/11442312.pem
	I1209 04:33:15.673881 1193189 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11442312.pem
	I1209 04:33:15.715756 1193189 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1209 04:33:15.723175 1193189 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1209 04:33:15.730584 1193189 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1209 04:33:15.738232 1193189 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 04:33:15.742081 1193189 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 04:09 /usr/share/ca-certificates/minikubeCA.pem
	I1209 04:33:15.742141 1193189 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 04:33:15.786133 1193189 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1209 04:33:15.793720 1193189 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1144231.pem
	I1209 04:33:15.801263 1193189 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1144231.pem /etc/ssl/certs/1144231.pem
	I1209 04:33:15.808357 1193189 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1144231.pem
	I1209 04:33:15.812098 1193189 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 04:18 /usr/share/ca-certificates/1144231.pem
	I1209 04:33:15.812149 1193189 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1144231.pem
	I1209 04:33:15.854297 1193189 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
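The three rounds above (11442312.pem, minikubeCA.pem, 1144231.pem) all follow OpenSSL's hashed-directory convention: `openssl x509 -hash -noout` prints the subject-name hash, and /etc/ssl/certs/<hash>.0 must be a symlink to the certificate, which the `test -L` calls then verify. A compact sketch for one of them:

    cert=/usr/share/ca-certificates/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in "$cert")   # prints b5213941 for minikubeCA.pem in this run
    sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"
    sudo test -L "/etc/ssl/certs/${hash}.0"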
	I1209 04:33:15.861740 1193189 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 04:33:15.865303 1193189 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1209 04:33:15.905838 1193189 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1209 04:33:15.946617 1193189 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1209 04:33:15.987357 1193189 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1209 04:33:16.032170 1193189 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1209 04:33:16.075134 1193189 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
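Each `-checkend 86400` above asks whether the certificate remains valid for the next 24 hours: exit status 0 means yes, 1 means it expires within the window, so a non-zero status would mark the cert for regeneration (not triggered in this run). For example:

    openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 \
        && echo 'valid for at least 24h' || echo 'expires within 24h'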
	I1209 04:33:16.116540 1193189 kubeadm.go:401] StartCluster: {Name:functional-667319 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-667319 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 04:33:16.116615 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1209 04:33:16.116676 1193189 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 04:33:16.141721 1193189 cri.go:89] found id: ""
	I1209 04:33:16.141780 1193189 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1209 04:33:16.149204 1193189 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1209 04:33:16.149214 1193189 kubeadm.go:598] restartPrimaryControlPlane start ...
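The restart decision above keys off the `ls` at 04:33:16.141: only if the kubelet flag file, the kubelet config, and the etcd data directory all exist does minikube restart the existing control plane instead of re-initializing. A sketch of that check (`ls` exits non-zero if any listed path is missing):

    if sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd >/dev/null 2>&1; then
        echo 'found existing configuration files, will attempt cluster restart'
    fi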
	I1209 04:33:16.149263 1193189 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1209 04:33:16.156279 1193189 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1209 04:33:16.156783 1193189 kubeconfig.go:125] found "functional-667319" server: "https://192.168.49.2:8441"
	I1209 04:33:16.159840 1193189 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1209 04:33:16.167426 1193189 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-09 04:18:41.945308258 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-09 04:33:14.782796805 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
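The drift detection above is a plain `diff -u` of the deployed kubeadm.yaml against the freshly rendered .new file; any difference (here, the admission-plugins value) marks the cluster for reconfiguration, and the .new file is copied over at 04:33:16.277 below. As a standalone sketch:

    if ! sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new; then
        sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml   # reconfigure from the new config
    fi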
	I1209 04:33:16.167445 1193189 kubeadm.go:1161] stopping kube-system containers ...
	I1209 04:33:16.167459 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I1209 04:33:16.167517 1193189 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 04:33:16.201963 1193189 cri.go:89] found id: ""
	I1209 04:33:16.202024 1193189 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1209 04:33:16.219973 1193189 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 04:33:16.227472 1193189 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5635 Dec  9 04:22 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Dec  9 04:22 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Dec  9 04:22 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5588 Dec  9 04:22 /etc/kubernetes/scheduler.conf
	
	I1209 04:33:16.227532 1193189 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1209 04:33:16.234796 1193189 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1209 04:33:16.241862 1193189 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1209 04:33:16.241916 1193189 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 04:33:16.249083 1193189 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1209 04:33:16.256206 1193189 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1209 04:33:16.256261 1193189 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 04:33:16.263352 1193189 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1209 04:33:16.270362 1193189 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1209 04:33:16.270416 1193189 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
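The three grep/rm pairs above implement one rule: any kubeconfig under /etc/kubernetes that does not reference https://control-plane.minikube.internal:8441 is removed so that kubeadm regenerates it (admin.conf matched the grep at 04:33:16.227 and was kept). Equivalently:

    for f in kubelet.conf controller-manager.conf scheduler.conf; do
        sudo grep -q 'https://control-plane.minikube.internal:8441' "/etc/kubernetes/$f" \
            || sudo rm -f "/etc/kubernetes/$f"
    done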
	I1209 04:33:16.277706 1193189 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 04:33:16.285107 1193189 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 04:33:16.327899 1193189 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 04:33:17.810490 1193189 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.482563431s)
	I1209 04:33:17.810548 1193189 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1209 04:33:18.017563 1193189 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 04:33:18.086202 1193189 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
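The restart path replays individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) rather than running a full `kubeadm init`. The five commands above collapse to this loop (a sketch; KUBEADM_PATH is just shorthand for the binaries directory used in the log):

    KUBEADM_PATH='/var/lib/minikube/binaries/v1.35.0-beta.0'
    for phase in 'certs all' 'kubeconfig all' 'kubelet-start' 'control-plane all' 'etcd local'; do
        sudo env PATH="$KUBEADM_PATH:$PATH" kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
    done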
	I1209 04:33:18.134715 1193189 api_server.go:52] waiting for apiserver process to appear ...
	I1209 04:33:18.134785 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:18.635261 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:19.135782 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:19.634982 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:20.134970 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:20.634979 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:21.134982 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:21.634901 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:22.135638 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:22.635624 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:23.134983 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:23.634978 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:24.135473 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:24.634966 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:25.135742 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:25.635347 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:26.134954 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:26.635380 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:27.134976 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:27.635752 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:28.135296 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:28.634924 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:29.134984 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:29.635367 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:30.135822 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:30.635721 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:31.135397 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:31.635633 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:32.134956 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:32.634993 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:33.135921 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:33.635624 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:34.134951 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:34.635593 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:35.134950 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:35.634961 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:36.134953 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:36.634877 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:37.135675 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:37.634982 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:38.135060 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:38.635809 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:39.135591 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:39.634959 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:40.135841 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:40.635611 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:41.135199 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:41.635170 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:42.134924 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:42.634948 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:43.135679 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:43.635637 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:44.134963 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:44.634963 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:45.135229 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:45.635702 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:46.134937 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:46.634881 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:47.135215 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:47.634980 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:48.134999 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:48.635744 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:49.135351 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:49.634915 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:50.135024 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:50.634852 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:51.134961 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:51.635396 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:52.135636 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:52.635513 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:53.135240 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:53.634952 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:54.135504 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:54.634869 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:55.135747 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:55.635267 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:56.135830 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:56.635547 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:57.134988 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:57.635506 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:58.135689 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:58.634992 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:59.135820 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:59.635373 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:34:00.135881 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:34:00.634984 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:34:01.135667 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:34:01.635758 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:34:02.135376 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:34:02.635880 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:34:03.135850 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:34:03.635021 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:34:04.135603 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:34:04.634975 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:34:05.135311 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:34:05.635291 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:34:06.135867 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:34:06.635018 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:34:07.135547 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:34:07.634967 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:34:08.134945 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:34:08.634950 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:34:09.135735 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:34:09.635308 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:34:10.135291 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:34:10.635185 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:34:11.134976 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:34:11.635433 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:34:12.134976 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:34:12.634985 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:34:13.134972 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:34:13.634991 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:34:14.135750 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:34:14.635398 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:34:15.135547 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:34:15.635003 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:34:16.135840 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:34:16.635833 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:34:17.135311 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:34:17.635902 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
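The run of pgrep calls from 04:33:18 to 04:34:17 is the apiserver wait loop: one probe roughly every 500 ms until kube-apiserver shows up in the process table. It never does here, so the loop falls through to the container and log inspection below. A minimal shell equivalent, with the 60s budget as an assumption inferred from the timestamps:

    timeout 60 bash -c \
        'until sudo pgrep -xnf "kube-apiserver.*minikube.*" >/dev/null; do sleep 0.5; done'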
	I1209 04:34:18.135877 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:34:18.135980 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:34:18.160417 1193189 cri.go:89] found id: ""
	I1209 04:34:18.160431 1193189 logs.go:282] 0 containers: []
	W1209 04:34:18.160438 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:34:18.160442 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:34:18.160499 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:34:18.186014 1193189 cri.go:89] found id: ""
	I1209 04:34:18.186028 1193189 logs.go:282] 0 containers: []
	W1209 04:34:18.186035 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:34:18.186040 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:34:18.186102 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:34:18.209963 1193189 cri.go:89] found id: ""
	I1209 04:34:18.209977 1193189 logs.go:282] 0 containers: []
	W1209 04:34:18.209983 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:34:18.209989 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:34:18.210048 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:34:18.234704 1193189 cri.go:89] found id: ""
	I1209 04:34:18.234723 1193189 logs.go:282] 0 containers: []
	W1209 04:34:18.234730 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:34:18.234737 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:34:18.234794 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:34:18.260085 1193189 cri.go:89] found id: ""
	I1209 04:34:18.260100 1193189 logs.go:282] 0 containers: []
	W1209 04:34:18.260107 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:34:18.260112 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:34:18.260170 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:34:18.284959 1193189 cri.go:89] found id: ""
	I1209 04:34:18.284972 1193189 logs.go:282] 0 containers: []
	W1209 04:34:18.284978 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:34:18.284983 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:34:18.285040 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:34:18.313883 1193189 cri.go:89] found id: ""
	I1209 04:34:18.313898 1193189 logs.go:282] 0 containers: []
	W1209 04:34:18.313905 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:34:18.313912 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:34:18.313923 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:34:18.330120 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:34:18.330138 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:34:18.391936 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:34:18.383205   10728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:18.383825   10728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:18.385661   10728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:18.386372   10728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:18.388209   10728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:34:18.383205   10728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:18.383825   10728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:18.385661   10728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:18.386372   10728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:18.388209   10728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:34:18.391947 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:34:18.391957 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:34:18.457339 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:34:18.457361 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:34:18.484687 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:34:18.484702 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
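Each "Gathering logs" round here and below reduces to the same five probes, taken verbatim from the Run: lines (the container-status probe falls back to `docker ps -a` when crictl is absent; simplified here):

    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
    sudo journalctl -u containerd -n 400
    sudo crictl ps -a
    sudo journalctl -u kubelet -n 400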
	I1209 04:34:21.045358 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:34:21.056486 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:34:21.056551 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:34:21.085673 1193189 cri.go:89] found id: ""
	I1209 04:34:21.085687 1193189 logs.go:282] 0 containers: []
	W1209 04:34:21.085693 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:34:21.085699 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:34:21.085758 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:34:21.111043 1193189 cri.go:89] found id: ""
	I1209 04:34:21.111056 1193189 logs.go:282] 0 containers: []
	W1209 04:34:21.111063 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:34:21.111068 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:34:21.111128 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:34:21.137031 1193189 cri.go:89] found id: ""
	I1209 04:34:21.137044 1193189 logs.go:282] 0 containers: []
	W1209 04:34:21.137051 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:34:21.137057 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:34:21.137118 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:34:21.161998 1193189 cri.go:89] found id: ""
	I1209 04:34:21.162012 1193189 logs.go:282] 0 containers: []
	W1209 04:34:21.162019 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:34:21.162024 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:34:21.162088 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:34:21.185710 1193189 cri.go:89] found id: ""
	I1209 04:34:21.185733 1193189 logs.go:282] 0 containers: []
	W1209 04:34:21.185740 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:34:21.185745 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:34:21.185805 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:34:21.209921 1193189 cri.go:89] found id: ""
	I1209 04:34:21.209934 1193189 logs.go:282] 0 containers: []
	W1209 04:34:21.209941 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:34:21.209946 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:34:21.210007 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:34:21.237263 1193189 cri.go:89] found id: ""
	I1209 04:34:21.237277 1193189 logs.go:282] 0 containers: []
	W1209 04:34:21.237284 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:34:21.237291 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:34:21.237302 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:34:21.253947 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:34:21.253964 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:34:21.323683 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:34:21.314716   10833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:21.315370   10833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:21.316976   10833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:21.317539   10833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:21.318471   10833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:34:21.314716   10833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:21.315370   10833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:21.316976   10833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:21.317539   10833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:21.318471   10833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:34:21.323693 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:34:21.323704 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:34:21.385947 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:34:21.385968 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:34:21.414692 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:34:21.414709 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:34:23.972329 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:34:23.982273 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:34:23.982333 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:34:24.008968 1193189 cri.go:89] found id: ""
	I1209 04:34:24.008983 1193189 logs.go:282] 0 containers: []
	W1209 04:34:24.008997 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:34:24.009002 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:34:24.009067 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:34:24.035053 1193189 cri.go:89] found id: ""
	I1209 04:34:24.035067 1193189 logs.go:282] 0 containers: []
	W1209 04:34:24.035074 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:34:24.035082 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:34:24.035155 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:34:24.060177 1193189 cri.go:89] found id: ""
	I1209 04:34:24.060202 1193189 logs.go:282] 0 containers: []
	W1209 04:34:24.060210 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:34:24.060215 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:34:24.060278 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:34:24.087352 1193189 cri.go:89] found id: ""
	I1209 04:34:24.087365 1193189 logs.go:282] 0 containers: []
	W1209 04:34:24.087372 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:34:24.087377 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:34:24.087436 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:34:24.112436 1193189 cri.go:89] found id: ""
	I1209 04:34:24.112450 1193189 logs.go:282] 0 containers: []
	W1209 04:34:24.112457 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:34:24.112463 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:34:24.112523 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:34:24.138043 1193189 cri.go:89] found id: ""
	I1209 04:34:24.138057 1193189 logs.go:282] 0 containers: []
	W1209 04:34:24.138063 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:34:24.138068 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:34:24.138127 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:34:24.162473 1193189 cri.go:89] found id: ""
	I1209 04:34:24.162486 1193189 logs.go:282] 0 containers: []
	W1209 04:34:24.162493 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:34:24.162501 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:34:24.162512 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:34:24.218725 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:34:24.218750 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:34:24.237014 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:34:24.237032 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:34:24.301761 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:34:24.293159   10942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:24.293842   10942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:24.295579   10942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:24.296219   10942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:24.297932   10942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:34:24.293159   10942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:24.293842   10942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:24.295579   10942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:24.296219   10942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:24.297932   10942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:34:24.301771 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:34:24.301782 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:34:24.364794 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:34:24.364819 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
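The round above repeats for every control-plane component: minikube asks crictl for container IDs matching each name, and an empty result is what produces the `No container was found matching` warning. The following minimal Go sketch (illustrative only, not minikube's actual cri.go code; it assumes crictl is installed and sudo is available on the node) mirrors that probe:

// Minimal sketch (not minikube's implementation) of the probe pattern
// visible in this log: for each control-plane component, ask crictl for
// matching container IDs; an empty result means no container exists yet.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func containerIDs(name string) ([]string, error) {
	// Mirrors the logged command: sudo crictl ps -a --quiet --name=<component>
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
	}
	for _, c := range components {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Printf("probe %q failed: %v\n", c, err)
			continue
		}
		if len(ids) == 0 {
			fmt.Printf("no container found matching %q\n", c)
		} else {
			fmt.Printf("%q containers: %v\n", c, ids)
		}
	}
}

Every probe in this report returns an empty ID list, so no control-plane container was ever created on the node; that is consistent with the apiserver on port 8441 never coming up.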
	I1209 04:34:26.896098 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:34:26.905998 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:34:26.906059 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:34:26.937370 1193189 cri.go:89] found id: ""
	I1209 04:34:26.937384 1193189 logs.go:282] 0 containers: []
	W1209 04:34:26.937390 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:34:26.937395 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:34:26.937455 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:34:26.961993 1193189 cri.go:89] found id: ""
	I1209 04:34:26.962006 1193189 logs.go:282] 0 containers: []
	W1209 04:34:26.962013 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:34:26.962018 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:34:26.962075 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:34:26.991456 1193189 cri.go:89] found id: ""
	I1209 04:34:26.991470 1193189 logs.go:282] 0 containers: []
	W1209 04:34:26.991476 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:34:26.991495 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:34:26.991554 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:34:27.018891 1193189 cri.go:89] found id: ""
	I1209 04:34:27.018904 1193189 logs.go:282] 0 containers: []
	W1209 04:34:27.018911 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:34:27.018916 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:34:27.018974 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:34:27.043050 1193189 cri.go:89] found id: ""
	I1209 04:34:27.043064 1193189 logs.go:282] 0 containers: []
	W1209 04:34:27.043070 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:34:27.043083 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:34:27.043141 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:34:27.069538 1193189 cri.go:89] found id: ""
	I1209 04:34:27.069553 1193189 logs.go:282] 0 containers: []
	W1209 04:34:27.069559 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:34:27.069564 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:34:27.069624 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:34:27.092560 1193189 cri.go:89] found id: ""
	I1209 04:34:27.092573 1193189 logs.go:282] 0 containers: []
	W1209 04:34:27.092580 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:34:27.092588 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:34:27.092597 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:34:27.149471 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:34:27.149509 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:34:27.166396 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:34:27.166413 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:34:27.233147 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:34:27.224772   11045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:27.225484   11045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:27.227169   11045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:27.227648   11045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:27.229172   11045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:34:27.224772   11045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:27.225484   11045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:27.227169   11045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:27.227648   11045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:27.229172   11045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:34:27.233160 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:34:27.233171 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:34:27.300582 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:34:27.300607 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:34:29.831076 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:34:29.841031 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:34:29.841110 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:34:29.870054 1193189 cri.go:89] found id: ""
	I1209 04:34:29.870068 1193189 logs.go:282] 0 containers: []
	W1209 04:34:29.870074 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:34:29.870080 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:34:29.870148 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:34:29.893884 1193189 cri.go:89] found id: ""
	I1209 04:34:29.893897 1193189 logs.go:282] 0 containers: []
	W1209 04:34:29.893904 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:34:29.893909 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:34:29.893984 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:34:29.917545 1193189 cri.go:89] found id: ""
	I1209 04:34:29.917559 1193189 logs.go:282] 0 containers: []
	W1209 04:34:29.917565 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:34:29.917570 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:34:29.917636 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:34:29.948707 1193189 cri.go:89] found id: ""
	I1209 04:34:29.948721 1193189 logs.go:282] 0 containers: []
	W1209 04:34:29.948727 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:34:29.948733 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:34:29.948792 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:34:29.988977 1193189 cri.go:89] found id: ""
	I1209 04:34:29.988990 1193189 logs.go:282] 0 containers: []
	W1209 04:34:29.988997 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:34:29.989003 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:34:29.989058 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:34:30.029614 1193189 cri.go:89] found id: ""
	I1209 04:34:30.029653 1193189 logs.go:282] 0 containers: []
	W1209 04:34:30.029660 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:34:30.029666 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:34:30.029747 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:34:30.057862 1193189 cri.go:89] found id: ""
	I1209 04:34:30.057877 1193189 logs.go:282] 0 containers: []
	W1209 04:34:30.057884 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:34:30.057892 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:34:30.057903 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:34:30.125643 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:34:30.125665 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:34:30.154365 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:34:30.154393 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:34:30.218342 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:34:30.218370 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:34:30.235415 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:34:30.235438 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:34:30.300328 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:34:30.292511   11158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:30.293159   11158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:30.294635   11158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:30.295041   11158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:30.296473   11158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:34:30.292511   11158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:30.293159   11158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:30.294635   11158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:30.295041   11158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:30.296473   11158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
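Each "describe nodes" attempt fails the same way: kubectl cannot even open a TCP connection to localhost:8441, so no API request is made at all. A short Go sketch (an illustrative stand-in, not part of the test harness) reproduces the underlying `connection refused` condition seen in the stderr above:

// Checks plain TCP reachability of the apiserver port; if nothing is
// listening, the dial fails the same way kubectl's requests do in this log.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver port unreachable:", err) // e.g. connect: connection refused
		return
	}
	conn.Close()
	fmt.Println("apiserver port is accepting connections")
}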
	I1209 04:34:32.800607 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:34:32.810690 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:34:32.810752 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:34:32.837030 1193189 cri.go:89] found id: ""
	I1209 04:34:32.837045 1193189 logs.go:282] 0 containers: []
	W1209 04:34:32.837052 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:34:32.837058 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:34:32.837136 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:34:32.863207 1193189 cri.go:89] found id: ""
	I1209 04:34:32.863221 1193189 logs.go:282] 0 containers: []
	W1209 04:34:32.863227 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:34:32.863242 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:34:32.863302 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:34:32.888280 1193189 cri.go:89] found id: ""
	I1209 04:34:32.888294 1193189 logs.go:282] 0 containers: []
	W1209 04:34:32.888301 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:34:32.888306 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:34:32.888365 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:34:32.912361 1193189 cri.go:89] found id: ""
	I1209 04:34:32.912375 1193189 logs.go:282] 0 containers: []
	W1209 04:34:32.912381 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:34:32.912387 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:34:32.912447 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:34:32.944341 1193189 cri.go:89] found id: ""
	I1209 04:34:32.944355 1193189 logs.go:282] 0 containers: []
	W1209 04:34:32.944363 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:34:32.944368 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:34:32.944427 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:34:32.974577 1193189 cri.go:89] found id: ""
	I1209 04:34:32.974592 1193189 logs.go:282] 0 containers: []
	W1209 04:34:32.974599 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:34:32.974604 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:34:32.974667 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:34:33.007167 1193189 cri.go:89] found id: ""
	I1209 04:34:33.007182 1193189 logs.go:282] 0 containers: []
	W1209 04:34:33.007188 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:34:33.007197 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:34:33.007208 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:34:33.072653 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:34:33.064421   11240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:33.065259   11240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:33.066881   11240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:33.067179   11240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:33.068654   11240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:34:33.064421   11240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:33.065259   11240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:33.066881   11240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:33.067179   11240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:33.068654   11240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:34:33.072662 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:34:33.072674 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:34:33.135053 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:34:33.135075 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:34:33.166357 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:34:33.166374 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:34:33.223824 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:34:33.223844 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:34:35.741231 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:34:35.751318 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:34:35.751378 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:34:35.776735 1193189 cri.go:89] found id: ""
	I1209 04:34:35.776749 1193189 logs.go:282] 0 containers: []
	W1209 04:34:35.776755 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:34:35.776760 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:34:35.776825 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:34:35.805165 1193189 cri.go:89] found id: ""
	I1209 04:34:35.805178 1193189 logs.go:282] 0 containers: []
	W1209 04:34:35.805185 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:34:35.805190 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:34:35.805255 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:34:35.834579 1193189 cri.go:89] found id: ""
	I1209 04:34:35.834592 1193189 logs.go:282] 0 containers: []
	W1209 04:34:35.834599 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:34:35.834604 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:34:35.834668 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:34:35.864666 1193189 cri.go:89] found id: ""
	I1209 04:34:35.864680 1193189 logs.go:282] 0 containers: []
	W1209 04:34:35.864687 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:34:35.864692 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:34:35.864753 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:34:35.888987 1193189 cri.go:89] found id: ""
	I1209 04:34:35.889001 1193189 logs.go:282] 0 containers: []
	W1209 04:34:35.889008 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:34:35.889013 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:34:35.889073 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:34:35.913760 1193189 cri.go:89] found id: ""
	I1209 04:34:35.913774 1193189 logs.go:282] 0 containers: []
	W1209 04:34:35.913781 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:34:35.913787 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:34:35.913848 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:34:35.953491 1193189 cri.go:89] found id: ""
	I1209 04:34:35.953504 1193189 logs.go:282] 0 containers: []
	W1209 04:34:35.953511 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:34:35.953519 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:34:35.953529 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:34:36.017926 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:34:36.017947 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:34:36.036525 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:34:36.036542 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:34:36.100279 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:34:36.091110   11352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:36.091738   11352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:36.093351   11352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:36.093993   11352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:36.095716   11352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:34:36.091110   11352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:36.091738   11352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:36.093351   11352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:36.093993   11352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:36.095716   11352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:34:36.100289 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:34:36.100302 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:34:36.165176 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:34:36.165198 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:34:38.692274 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:34:38.702150 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:34:38.702209 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:34:38.727703 1193189 cri.go:89] found id: ""
	I1209 04:34:38.727718 1193189 logs.go:282] 0 containers: []
	W1209 04:34:38.727725 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:34:38.727739 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:34:38.727802 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:34:38.752490 1193189 cri.go:89] found id: ""
	I1209 04:34:38.752509 1193189 logs.go:282] 0 containers: []
	W1209 04:34:38.752515 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:34:38.752521 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:34:38.752582 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:34:38.776648 1193189 cri.go:89] found id: ""
	I1209 04:34:38.776662 1193189 logs.go:282] 0 containers: []
	W1209 04:34:38.776668 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:34:38.776676 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:34:38.776735 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:34:38.801762 1193189 cri.go:89] found id: ""
	I1209 04:34:38.801775 1193189 logs.go:282] 0 containers: []
	W1209 04:34:38.801782 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:34:38.801788 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:34:38.801849 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:34:38.825649 1193189 cri.go:89] found id: ""
	I1209 04:34:38.825662 1193189 logs.go:282] 0 containers: []
	W1209 04:34:38.825668 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:34:38.825673 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:34:38.825734 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:34:38.850253 1193189 cri.go:89] found id: ""
	I1209 04:34:38.850268 1193189 logs.go:282] 0 containers: []
	W1209 04:34:38.850274 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:34:38.850280 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:34:38.850342 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:34:38.878018 1193189 cri.go:89] found id: ""
	I1209 04:34:38.878032 1193189 logs.go:282] 0 containers: []
	W1209 04:34:38.878039 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:34:38.878046 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:34:38.878056 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:34:38.937715 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:34:38.937734 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:34:38.956265 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:34:38.956289 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:34:39.027118 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:34:39.019252   11455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:39.020066   11455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:39.021610   11455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:39.021907   11455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:39.023382   11455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:34:39.019252   11455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:39.020066   11455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:39.021610   11455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:39.021907   11455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:39.023382   11455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:34:39.027128 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:34:39.027140 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:34:39.093921 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:34:39.093942 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:34:41.623796 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:34:41.634102 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:34:41.634167 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:34:41.661702 1193189 cri.go:89] found id: ""
	I1209 04:34:41.661716 1193189 logs.go:282] 0 containers: []
	W1209 04:34:41.661723 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:34:41.661728 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:34:41.661793 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:34:41.686941 1193189 cri.go:89] found id: ""
	I1209 04:34:41.686955 1193189 logs.go:282] 0 containers: []
	W1209 04:34:41.686962 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:34:41.686967 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:34:41.687026 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:34:41.716790 1193189 cri.go:89] found id: ""
	I1209 04:34:41.716805 1193189 logs.go:282] 0 containers: []
	W1209 04:34:41.716813 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:34:41.716818 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:34:41.716881 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:34:41.741120 1193189 cri.go:89] found id: ""
	I1209 04:34:41.741135 1193189 logs.go:282] 0 containers: []
	W1209 04:34:41.741141 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:34:41.741147 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:34:41.741206 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:34:41.765600 1193189 cri.go:89] found id: ""
	I1209 04:34:41.765614 1193189 logs.go:282] 0 containers: []
	W1209 04:34:41.765622 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:34:41.765627 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:34:41.765687 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:34:41.789956 1193189 cri.go:89] found id: ""
	I1209 04:34:41.789971 1193189 logs.go:282] 0 containers: []
	W1209 04:34:41.789978 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:34:41.789983 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:34:41.790047 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:34:41.813854 1193189 cri.go:89] found id: ""
	I1209 04:34:41.813868 1193189 logs.go:282] 0 containers: []
	W1209 04:34:41.813875 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:34:41.813883 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:34:41.813893 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:34:41.869283 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:34:41.869303 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:34:41.886263 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:34:41.886279 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:34:41.966783 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:34:41.957901   11556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:41.958580   11556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:41.960469   11556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:41.961191   11556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:41.962837   11556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:34:41.957901   11556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:41.958580   11556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:41.960469   11556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:41.961191   11556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:41.962837   11556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:34:41.966793 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:34:41.966810 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:34:42.035421 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:34:42.035443 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:34:44.567350 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:34:44.577592 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:34:44.577656 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:34:44.607032 1193189 cri.go:89] found id: ""
	I1209 04:34:44.607047 1193189 logs.go:282] 0 containers: []
	W1209 04:34:44.607054 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:34:44.607059 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:34:44.607119 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:34:44.632031 1193189 cri.go:89] found id: ""
	I1209 04:34:44.632045 1193189 logs.go:282] 0 containers: []
	W1209 04:34:44.632052 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:34:44.632057 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:34:44.632116 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:34:44.656224 1193189 cri.go:89] found id: ""
	I1209 04:34:44.656237 1193189 logs.go:282] 0 containers: []
	W1209 04:34:44.656244 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:34:44.656249 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:34:44.656308 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:34:44.680302 1193189 cri.go:89] found id: ""
	I1209 04:34:44.680317 1193189 logs.go:282] 0 containers: []
	W1209 04:34:44.680323 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:34:44.680329 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:34:44.680389 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:34:44.705286 1193189 cri.go:89] found id: ""
	I1209 04:34:44.705301 1193189 logs.go:282] 0 containers: []
	W1209 04:34:44.705308 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:34:44.705319 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:34:44.705380 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:34:44.729365 1193189 cri.go:89] found id: ""
	I1209 04:34:44.729378 1193189 logs.go:282] 0 containers: []
	W1209 04:34:44.729385 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:34:44.729391 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:34:44.729452 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:34:44.753588 1193189 cri.go:89] found id: ""
	I1209 04:34:44.753601 1193189 logs.go:282] 0 containers: []
	W1209 04:34:44.753608 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:34:44.753616 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:34:44.753626 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:34:44.809786 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:34:44.809806 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:34:44.827005 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:34:44.827023 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:34:44.888308 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:34:44.880071   11658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:44.880850   11658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:44.882536   11658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:44.882961   11658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:44.884478   11658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:34:44.880071   11658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:44.880850   11658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:44.882536   11658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:44.882961   11658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:44.884478   11658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:34:44.888318 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:34:44.888329 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:34:44.955975 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:34:44.955994 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
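The timestamps on the pgrep lines (04:34:26.9, 29.8, 32.8, 35.7, ...) suggest one probe round roughly every three seconds until a wait deadline expires. A hedged Go sketch of such a retry loop follows; the interval and deadline are assumptions for illustration, not values taken from minikube:

// Illustrative retry cadence: run a probe round, sleep, repeat until a
// deadline passes. In the real log each round is the pgrep/crictl/log-gather
// sequence shown above.
package main

import (
	"fmt"
	"time"
)

func main() {
	deadline := time.Now().Add(30 * time.Second) // assumed deadline
	for time.Now().Before(deadline) {
		fmt.Println("probe round at", time.Now().Format("15:04:05"))
		// ... run pgrep/crictl probes and gather logs here ...
		time.Sleep(3 * time.Second) // cadence inferred from the log timestamps
	}
	fmt.Println("gave up waiting for kube-apiserver")
}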
	I1209 04:34:47.492101 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:34:47.502461 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:34:47.502521 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:34:47.527075 1193189 cri.go:89] found id: ""
	I1209 04:34:47.527089 1193189 logs.go:282] 0 containers: []
	W1209 04:34:47.527095 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:34:47.527109 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:34:47.527168 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:34:47.552346 1193189 cri.go:89] found id: ""
	I1209 04:34:47.552361 1193189 logs.go:282] 0 containers: []
	W1209 04:34:47.552368 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:34:47.552372 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:34:47.552439 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:34:47.577991 1193189 cri.go:89] found id: ""
	I1209 04:34:47.578005 1193189 logs.go:282] 0 containers: []
	W1209 04:34:47.578011 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:34:47.578017 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:34:47.578077 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:34:47.601711 1193189 cri.go:89] found id: ""
	I1209 04:34:47.601726 1193189 logs.go:282] 0 containers: []
	W1209 04:34:47.601733 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:34:47.601738 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:34:47.601799 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:34:47.626261 1193189 cri.go:89] found id: ""
	I1209 04:34:47.626274 1193189 logs.go:282] 0 containers: []
	W1209 04:34:47.626281 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:34:47.626287 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:34:47.626346 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:34:47.650195 1193189 cri.go:89] found id: ""
	I1209 04:34:47.650209 1193189 logs.go:282] 0 containers: []
	W1209 04:34:47.650215 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:34:47.650222 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:34:47.650289 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:34:47.674818 1193189 cri.go:89] found id: ""
	I1209 04:34:47.674844 1193189 logs.go:282] 0 containers: []
	W1209 04:34:47.674851 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:34:47.674858 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:34:47.674868 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:34:47.730669 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:34:47.730689 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:34:47.747530 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:34:47.747553 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:34:47.809873 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:34:47.800913   11766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:47.801626   11766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:47.803387   11766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:47.804067   11766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:47.805583   11766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:34:47.809893 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:34:47.809905 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:34:47.871413 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:34:47.871433 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
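
Every probe in the cycle above returns an empty ID list: none of the components the harness checks (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet) was ever created, so the cluster is not crash-looping but failing to start at all. The same probe can be repeated by hand against the node; a minimal sketch, assuming `minikube ssh` access to the profile under test (functional-667319 in this run) and crictl present in the node image:

	# List kube-apiserver containers in any state; empty output means none was ever created.
	minikube -p functional-667319 ssh -- sudo crictl ps -a --quiet --name=kube-apiserver
	# Same check for the remaining components the harness probes.
	for c in etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
		minikube -p functional-667319 ssh -- sudo crictl ps -a --quiet --name="$c"
	done
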
	I1209 04:34:50.398661 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:34:50.408687 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:34:50.408759 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:34:50.432488 1193189 cri.go:89] found id: ""
	I1209 04:34:50.432507 1193189 logs.go:282] 0 containers: []
	W1209 04:34:50.432514 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:34:50.432520 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:34:50.432581 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:34:50.456531 1193189 cri.go:89] found id: ""
	I1209 04:34:50.456545 1193189 logs.go:282] 0 containers: []
	W1209 04:34:50.456552 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:34:50.456557 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:34:50.456617 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:34:50.484856 1193189 cri.go:89] found id: ""
	I1209 04:34:50.484871 1193189 logs.go:282] 0 containers: []
	W1209 04:34:50.484878 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:34:50.484884 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:34:50.484946 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:34:50.510277 1193189 cri.go:89] found id: ""
	I1209 04:34:50.510291 1193189 logs.go:282] 0 containers: []
	W1209 04:34:50.510297 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:34:50.510302 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:34:50.510361 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:34:50.533718 1193189 cri.go:89] found id: ""
	I1209 04:34:50.533744 1193189 logs.go:282] 0 containers: []
	W1209 04:34:50.533751 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:34:50.533756 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:34:50.533823 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:34:50.556925 1193189 cri.go:89] found id: ""
	I1209 04:34:50.556939 1193189 logs.go:282] 0 containers: []
	W1209 04:34:50.556945 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:34:50.556951 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:34:50.557010 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:34:50.581553 1193189 cri.go:89] found id: ""
	I1209 04:34:50.581567 1193189 logs.go:282] 0 containers: []
	W1209 04:34:50.581574 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:34:50.581582 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:34:50.581592 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:34:50.640077 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:34:50.640096 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:34:50.657419 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:34:50.657435 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:34:50.717755 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:34:50.710080   11873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:50.710723   11873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:50.711869   11873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:50.712446   11873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:50.713899   11873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:34:50.717765 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:34:50.717775 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:34:50.784823 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:34:50.784842 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
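
Each `describe nodes` attempt fails the same way, with `dial tcp [::1]:8441: connect: connection refused`: nothing is listening on the apiserver port inside the node, which matches the empty container listings rather than a kubeconfig or credentials problem. A quick way to separate "no listener" from "apiserver up but unhealthy", sketched under the same assumptions (the port 8441 and kubeconfig path are taken from the log above; curl inside the node image is assumed):

	# Prints 000 on connection refused; any HTTP code (even 401/403) would mean a listener is up.
	minikube -p functional-667319 ssh -- curl -sk -o /dev/null -w '%{http_code}\n' https://localhost:8441/livez
	# The harness's own process probe, runnable directly.
	minikube -p functional-667319 ssh -- sudo pgrep -xnf 'kube-apiserver.*minikube.*'
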
	I1209 04:34:53.324166 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:34:53.333904 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:34:53.333963 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:34:53.357773 1193189 cri.go:89] found id: ""
	I1209 04:34:53.357787 1193189 logs.go:282] 0 containers: []
	W1209 04:34:53.357794 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:34:53.357799 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:34:53.357869 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:34:53.381476 1193189 cri.go:89] found id: ""
	I1209 04:34:53.381490 1193189 logs.go:282] 0 containers: []
	W1209 04:34:53.381498 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:34:53.381504 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:34:53.381563 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:34:53.404639 1193189 cri.go:89] found id: ""
	I1209 04:34:53.404653 1193189 logs.go:282] 0 containers: []
	W1209 04:34:53.404671 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:34:53.404677 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:34:53.404737 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:34:53.428572 1193189 cri.go:89] found id: ""
	I1209 04:34:53.428586 1193189 logs.go:282] 0 containers: []
	W1209 04:34:53.428593 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:34:53.428598 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:34:53.428656 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:34:53.453240 1193189 cri.go:89] found id: ""
	I1209 04:34:53.453254 1193189 logs.go:282] 0 containers: []
	W1209 04:34:53.453261 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:34:53.453266 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:34:53.453325 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:34:53.478715 1193189 cri.go:89] found id: ""
	I1209 04:34:53.478728 1193189 logs.go:282] 0 containers: []
	W1209 04:34:53.478735 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:34:53.478740 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:34:53.478798 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:34:53.503483 1193189 cri.go:89] found id: ""
	I1209 04:34:53.503497 1193189 logs.go:282] 0 containers: []
	W1209 04:34:53.503503 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:34:53.503511 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:34:53.503522 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:34:53.569898 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:34:53.560949   11970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:53.561857   11970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:53.563361   11970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:53.563947   11970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:53.565706   11970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:34:53.569907 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:34:53.569918 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:34:53.631345 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:34:53.631366 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:34:53.657935 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:34:53.657951 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:34:53.717129 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:34:53.717148 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:34:56.235149 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:34:56.245451 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:34:56.245512 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:34:56.273858 1193189 cri.go:89] found id: ""
	I1209 04:34:56.273872 1193189 logs.go:282] 0 containers: []
	W1209 04:34:56.273879 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:34:56.273884 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:34:56.273946 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:34:56.299990 1193189 cri.go:89] found id: ""
	I1209 04:34:56.300004 1193189 logs.go:282] 0 containers: []
	W1209 04:34:56.300036 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:34:56.300042 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:34:56.300109 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:34:56.325952 1193189 cri.go:89] found id: ""
	I1209 04:34:56.325965 1193189 logs.go:282] 0 containers: []
	W1209 04:34:56.325972 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:34:56.325977 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:34:56.326044 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:34:56.349999 1193189 cri.go:89] found id: ""
	I1209 04:34:56.350013 1193189 logs.go:282] 0 containers: []
	W1209 04:34:56.350020 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:34:56.350025 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:34:56.350088 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:34:56.376083 1193189 cri.go:89] found id: ""
	I1209 04:34:56.376097 1193189 logs.go:282] 0 containers: []
	W1209 04:34:56.376104 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:34:56.376109 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:34:56.376177 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:34:56.400259 1193189 cri.go:89] found id: ""
	I1209 04:34:56.400273 1193189 logs.go:282] 0 containers: []
	W1209 04:34:56.400280 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:34:56.400293 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:34:56.400352 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:34:56.424757 1193189 cri.go:89] found id: ""
	I1209 04:34:56.424777 1193189 logs.go:282] 0 containers: []
	W1209 04:34:56.424784 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:34:56.424792 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:34:56.424802 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:34:56.453832 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:34:56.453849 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:34:56.512444 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:34:56.512463 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:34:56.531303 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:34:56.531322 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:34:56.595582 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:34:56.587456   12094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:56.588255   12094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:56.589902   12094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:56.590193   12094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:56.591722   12094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:34:56.595592 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:34:56.595602 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:34:59.163281 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:34:59.173117 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:34:59.173176 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:34:59.206232 1193189 cri.go:89] found id: ""
	I1209 04:34:59.206246 1193189 logs.go:282] 0 containers: []
	W1209 04:34:59.206253 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:34:59.206257 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:34:59.206321 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:34:59.239889 1193189 cri.go:89] found id: ""
	I1209 04:34:59.239903 1193189 logs.go:282] 0 containers: []
	W1209 04:34:59.239910 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:34:59.239915 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:34:59.239977 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:34:59.268932 1193189 cri.go:89] found id: ""
	I1209 04:34:59.268946 1193189 logs.go:282] 0 containers: []
	W1209 04:34:59.268953 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:34:59.268958 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:34:59.269019 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:34:59.293191 1193189 cri.go:89] found id: ""
	I1209 04:34:59.293205 1193189 logs.go:282] 0 containers: []
	W1209 04:34:59.293211 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:34:59.293217 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:34:59.293279 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:34:59.317923 1193189 cri.go:89] found id: ""
	I1209 04:34:59.317936 1193189 logs.go:282] 0 containers: []
	W1209 04:34:59.317943 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:34:59.317948 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:34:59.318009 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:34:59.342336 1193189 cri.go:89] found id: ""
	I1209 04:34:59.342350 1193189 logs.go:282] 0 containers: []
	W1209 04:34:59.342356 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:34:59.342361 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:34:59.342419 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:34:59.366502 1193189 cri.go:89] found id: ""
	I1209 04:34:59.366517 1193189 logs.go:282] 0 containers: []
	W1209 04:34:59.366524 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:34:59.366532 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:34:59.366542 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:34:59.422133 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:34:59.422153 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:34:59.439160 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:34:59.439187 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:34:59.506261 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:34:59.497371   12189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:59.498039   12189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:59.499661   12189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:59.500189   12189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:59.501847   12189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:34:59.506271 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:34:59.506282 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:34:59.575415 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:34:59.575436 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:35:02.103491 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:35:02.113633 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:35:02.113694 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:35:02.144619 1193189 cri.go:89] found id: ""
	I1209 04:35:02.144633 1193189 logs.go:282] 0 containers: []
	W1209 04:35:02.144640 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:35:02.144646 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:35:02.144705 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:35:02.170344 1193189 cri.go:89] found id: ""
	I1209 04:35:02.170361 1193189 logs.go:282] 0 containers: []
	W1209 04:35:02.170368 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:35:02.170373 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:35:02.170433 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:35:02.197667 1193189 cri.go:89] found id: ""
	I1209 04:35:02.197691 1193189 logs.go:282] 0 containers: []
	W1209 04:35:02.197699 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:35:02.197704 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:35:02.197776 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:35:02.234579 1193189 cri.go:89] found id: ""
	I1209 04:35:02.234593 1193189 logs.go:282] 0 containers: []
	W1209 04:35:02.234600 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:35:02.234605 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:35:02.234676 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:35:02.261734 1193189 cri.go:89] found id: ""
	I1209 04:35:02.261750 1193189 logs.go:282] 0 containers: []
	W1209 04:35:02.261757 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:35:02.261763 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:35:02.261840 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:35:02.287117 1193189 cri.go:89] found id: ""
	I1209 04:35:02.287132 1193189 logs.go:282] 0 containers: []
	W1209 04:35:02.287149 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:35:02.287155 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:35:02.287215 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:35:02.316821 1193189 cri.go:89] found id: ""
	I1209 04:35:02.316841 1193189 logs.go:282] 0 containers: []
	W1209 04:35:02.316887 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:35:02.316894 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:35:02.316908 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:35:02.374344 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:35:02.374364 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:35:02.391657 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:35:02.391675 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:35:02.456609 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:35:02.448842   12295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:02.449370   12295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:02.450865   12295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:02.451343   12295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:02.452897   12295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:35:02.456619 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:35:02.456630 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:35:02.522522 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:35:02.522544 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:35:05.052204 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:35:05.062711 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:35:05.062783 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:35:05.088683 1193189 cri.go:89] found id: ""
	I1209 04:35:05.088699 1193189 logs.go:282] 0 containers: []
	W1209 04:35:05.088708 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:35:05.088714 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:35:05.088786 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:35:05.114558 1193189 cri.go:89] found id: ""
	I1209 04:35:05.114573 1193189 logs.go:282] 0 containers: []
	W1209 04:35:05.114580 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:35:05.114585 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:35:05.114647 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:35:05.139679 1193189 cri.go:89] found id: ""
	I1209 04:35:05.139694 1193189 logs.go:282] 0 containers: []
	W1209 04:35:05.139701 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:35:05.139713 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:35:05.139785 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:35:05.165102 1193189 cri.go:89] found id: ""
	I1209 04:35:05.165116 1193189 logs.go:282] 0 containers: []
	W1209 04:35:05.165123 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:35:05.165129 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:35:05.165200 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:35:05.193330 1193189 cri.go:89] found id: ""
	I1209 04:35:05.193354 1193189 logs.go:282] 0 containers: []
	W1209 04:35:05.193361 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:35:05.193366 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:35:05.193434 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:35:05.225572 1193189 cri.go:89] found id: ""
	I1209 04:35:05.225602 1193189 logs.go:282] 0 containers: []
	W1209 04:35:05.225610 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:35:05.225615 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:35:05.225684 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:35:05.253111 1193189 cri.go:89] found id: ""
	I1209 04:35:05.253125 1193189 logs.go:282] 0 containers: []
	W1209 04:35:05.253134 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:35:05.253142 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:35:05.253151 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:35:05.311870 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:35:05.311891 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:35:05.329165 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:35:05.329181 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:35:05.403755 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:35:05.395160   12402 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:05.395840   12402 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:05.397743   12402 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:05.398247   12402 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:05.399758   12402 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:35:05.403765 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:35:05.403778 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:35:05.466140 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:35:05.466163 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:35:08.001482 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:35:08.012555 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:35:08.012621 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:35:08.038489 1193189 cri.go:89] found id: ""
	I1209 04:35:08.038502 1193189 logs.go:282] 0 containers: []
	W1209 04:35:08.038510 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:35:08.038515 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:35:08.038577 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:35:08.063791 1193189 cri.go:89] found id: ""
	I1209 04:35:08.063806 1193189 logs.go:282] 0 containers: []
	W1209 04:35:08.063813 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:35:08.063819 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:35:08.063883 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:35:08.088918 1193189 cri.go:89] found id: ""
	I1209 04:35:08.088933 1193189 logs.go:282] 0 containers: []
	W1209 04:35:08.088940 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:35:08.088945 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:35:08.089006 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:35:08.113601 1193189 cri.go:89] found id: ""
	I1209 04:35:08.113614 1193189 logs.go:282] 0 containers: []
	W1209 04:35:08.113623 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:35:08.113628 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:35:08.113684 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:35:08.136899 1193189 cri.go:89] found id: ""
	I1209 04:35:08.136912 1193189 logs.go:282] 0 containers: []
	W1209 04:35:08.136924 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:35:08.136929 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:35:08.136988 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:35:08.160001 1193189 cri.go:89] found id: ""
	I1209 04:35:08.160050 1193189 logs.go:282] 0 containers: []
	W1209 04:35:08.160057 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:35:08.160062 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:35:08.160119 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:35:08.193362 1193189 cri.go:89] found id: ""
	I1209 04:35:08.193375 1193189 logs.go:282] 0 containers: []
	W1209 04:35:08.193382 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:35:08.193390 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:35:08.193400 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:35:08.255924 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:35:08.255942 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:35:08.274860 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:35:08.274876 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:35:08.341852 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:35:08.333782   12503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:08.334529   12503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:08.336277   12503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:08.336676   12503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:08.338082   12503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:35:08.341863 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:35:08.341875 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:35:08.402199 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:35:08.402217 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:35:10.929478 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:35:10.939723 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:35:10.939784 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:35:10.964690 1193189 cri.go:89] found id: ""
	I1209 04:35:10.964704 1193189 logs.go:282] 0 containers: []
	W1209 04:35:10.964711 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:35:10.964716 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:35:10.964796 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:35:10.993239 1193189 cri.go:89] found id: ""
	I1209 04:35:10.993253 1193189 logs.go:282] 0 containers: []
	W1209 04:35:10.993260 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:35:10.993265 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:35:10.993323 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:35:11.019779 1193189 cri.go:89] found id: ""
	I1209 04:35:11.019793 1193189 logs.go:282] 0 containers: []
	W1209 04:35:11.019800 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:35:11.019805 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:35:11.019867 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:35:11.044082 1193189 cri.go:89] found id: ""
	I1209 04:35:11.044095 1193189 logs.go:282] 0 containers: []
	W1209 04:35:11.044104 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:35:11.044109 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:35:11.044170 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:35:11.067732 1193189 cri.go:89] found id: ""
	I1209 04:35:11.067746 1193189 logs.go:282] 0 containers: []
	W1209 04:35:11.067753 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:35:11.067758 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:35:11.067827 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:35:11.094131 1193189 cri.go:89] found id: ""
	I1209 04:35:11.094145 1193189 logs.go:282] 0 containers: []
	W1209 04:35:11.094152 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:35:11.094157 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:35:11.094217 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:35:11.120246 1193189 cri.go:89] found id: ""
	I1209 04:35:11.120261 1193189 logs.go:282] 0 containers: []
	W1209 04:35:11.120269 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:35:11.120277 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:35:11.120288 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:35:11.188699 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:35:11.188719 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:35:11.220249 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:35:11.220272 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:35:11.281813 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:35:11.281834 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:35:11.299608 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:35:11.299624 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:35:11.364974 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:35:11.356357   12622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:11.357044   12622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:11.358808   12622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:11.359431   12622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:11.361160   12622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
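The repeated "connection refused" errors above mean nothing is listening on the apiserver port yet; every kubectl call fails at the TCP dial. A minimal Go sketch of that connectivity check, assuming the same localhost:8441 endpoint this run configures via --apiserver-port=8441 (the address comes from the log; the program is illustrative, not minikube's code):

package main

// probe.go: dial the apiserver port once and report whether anything accepts
// the connection. When no kube-apiserver container is running, this prints
// the same "connect: connection refused" seen in the kubectl stderr above.
import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is accepting connections")
}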
	I1209 04:35:13.865252 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:35:13.875906 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:35:13.875966 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:35:13.901925 1193189 cri.go:89] found id: ""
	I1209 04:35:13.901941 1193189 logs.go:282] 0 containers: []
	W1209 04:35:13.901947 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:35:13.901953 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:35:13.902023 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:35:13.929808 1193189 cri.go:89] found id: ""
	I1209 04:35:13.929823 1193189 logs.go:282] 0 containers: []
	W1209 04:35:13.929830 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:35:13.929835 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:35:13.929896 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:35:13.955030 1193189 cri.go:89] found id: ""
	I1209 04:35:13.955045 1193189 logs.go:282] 0 containers: []
	W1209 04:35:13.955051 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:35:13.955056 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:35:13.955114 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:35:13.979829 1193189 cri.go:89] found id: ""
	I1209 04:35:13.979843 1193189 logs.go:282] 0 containers: []
	W1209 04:35:13.979849 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:35:13.979854 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:35:13.979918 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:35:14.007254 1193189 cri.go:89] found id: ""
	I1209 04:35:14.007269 1193189 logs.go:282] 0 containers: []
	W1209 04:35:14.007275 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:35:14.007281 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:35:14.007345 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:35:14.032915 1193189 cri.go:89] found id: ""
	I1209 04:35:14.032929 1193189 logs.go:282] 0 containers: []
	W1209 04:35:14.032936 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:35:14.032941 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:35:14.032999 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:35:14.061801 1193189 cri.go:89] found id: ""
	I1209 04:35:14.061826 1193189 logs.go:282] 0 containers: []
	W1209 04:35:14.061834 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:35:14.061842 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:35:14.061853 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:35:14.125545 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:35:14.117510   12701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:14.118249   12701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:14.119815   12701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:14.120178   12701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:14.121732   12701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:35:14.125555 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:35:14.125569 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:35:14.192586 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:35:14.192605 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:35:14.223400 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:35:14.223417 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:35:14.284525 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:35:14.284545 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
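Each polling cycle gathers the same four log sources (kubelet, dmesg, containerd, container status) before retrying. An illustrative Go wrapper that runs those exact shell commands, assuming it executes on the node with passwordless sudo (the commands are copied verbatim from the log; the wrapper is a sketch, not minikube's ssh_runner):

package main

// gather.go: run the four log-collection commands from the cycle above and
// print each command's combined output.
import (
	"fmt"
	"os/exec"
)

func main() {
	sources := []struct{ name, cmd string }{
		{"kubelet", "sudo journalctl -u kubelet -n 400"},
		{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
		{"containerd", "sudo journalctl -u containerd -n 400"},
		{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
	}
	for _, s := range sources {
		out, err := exec.Command("/bin/bash", "-c", s.cmd).CombinedOutput()
		fmt.Printf("== %s (err=%v) ==\n%s\n", s.name, err, out)
	}
}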
	I1209 04:35:16.802913 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:35:16.812669 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:35:16.812730 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:35:16.836304 1193189 cri.go:89] found id: ""
	I1209 04:35:16.836318 1193189 logs.go:282] 0 containers: []
	W1209 04:35:16.836324 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:35:16.836329 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:35:16.836386 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:35:16.861382 1193189 cri.go:89] found id: ""
	I1209 04:35:16.861396 1193189 logs.go:282] 0 containers: []
	W1209 04:35:16.861403 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:35:16.861407 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:35:16.861467 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:35:16.884827 1193189 cri.go:89] found id: ""
	I1209 04:35:16.884841 1193189 logs.go:282] 0 containers: []
	W1209 04:35:16.884848 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:35:16.884853 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:35:16.884913 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:35:16.907933 1193189 cri.go:89] found id: ""
	I1209 04:35:16.907946 1193189 logs.go:282] 0 containers: []
	W1209 04:35:16.907953 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:35:16.907959 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:35:16.908028 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:35:16.933329 1193189 cri.go:89] found id: ""
	I1209 04:35:16.933344 1193189 logs.go:282] 0 containers: []
	W1209 04:35:16.933350 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:35:16.933355 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:35:16.933418 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:35:16.957725 1193189 cri.go:89] found id: ""
	I1209 04:35:16.957739 1193189 logs.go:282] 0 containers: []
	W1209 04:35:16.957745 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:35:16.957751 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:35:16.957807 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:35:16.981209 1193189 cri.go:89] found id: ""
	I1209 04:35:16.981223 1193189 logs.go:282] 0 containers: []
	W1209 04:35:16.981231 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:35:16.981240 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:35:16.981249 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:35:17.039472 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:35:17.039491 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:35:17.056497 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:35:17.056514 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:35:17.119231 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:35:17.111277   12811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:17.111948   12811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:17.113585   12811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:17.114023   12811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:17.115511   12811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:35:17.119240 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:35:17.119251 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:35:17.181494 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:35:17.181513 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:35:19.709396 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:35:19.719323 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:35:19.719388 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:35:19.743245 1193189 cri.go:89] found id: ""
	I1209 04:35:19.743259 1193189 logs.go:282] 0 containers: []
	W1209 04:35:19.743266 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:35:19.743271 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:35:19.743328 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:35:19.767566 1193189 cri.go:89] found id: ""
	I1209 04:35:19.767581 1193189 logs.go:282] 0 containers: []
	W1209 04:35:19.767587 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:35:19.767592 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:35:19.767649 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:35:19.797227 1193189 cri.go:89] found id: ""
	I1209 04:35:19.797241 1193189 logs.go:282] 0 containers: []
	W1209 04:35:19.797248 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:35:19.797253 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:35:19.797311 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:35:19.820451 1193189 cri.go:89] found id: ""
	I1209 04:35:19.820465 1193189 logs.go:282] 0 containers: []
	W1209 04:35:19.820471 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:35:19.820477 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:35:19.820534 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:35:19.844577 1193189 cri.go:89] found id: ""
	I1209 04:35:19.844591 1193189 logs.go:282] 0 containers: []
	W1209 04:35:19.844597 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:35:19.844603 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:35:19.844661 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:35:19.868336 1193189 cri.go:89] found id: ""
	I1209 04:35:19.868350 1193189 logs.go:282] 0 containers: []
	W1209 04:35:19.868356 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:35:19.868362 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:35:19.868430 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:35:19.893016 1193189 cri.go:89] found id: ""
	I1209 04:35:19.893030 1193189 logs.go:282] 0 containers: []
	W1209 04:35:19.893037 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:35:19.893045 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:35:19.893055 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:35:19.947540 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:35:19.947561 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:35:19.964623 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:35:19.964640 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:35:20.041799 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:35:20.033487   12917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:20.034256   12917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:20.035804   12917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:20.036289   12917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:20.037849   12917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:35:20.041809 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:35:20.041829 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:35:20.106338 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:35:20.106361 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:35:22.634358 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:35:22.644145 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:35:22.644208 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:35:22.670154 1193189 cri.go:89] found id: ""
	I1209 04:35:22.670171 1193189 logs.go:282] 0 containers: []
	W1209 04:35:22.670178 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:35:22.670189 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:35:22.670255 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:35:22.704705 1193189 cri.go:89] found id: ""
	I1209 04:35:22.704724 1193189 logs.go:282] 0 containers: []
	W1209 04:35:22.704731 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:35:22.704742 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:35:22.704815 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:35:22.729994 1193189 cri.go:89] found id: ""
	I1209 04:35:22.730010 1193189 logs.go:282] 0 containers: []
	W1209 04:35:22.730016 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:35:22.730021 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:35:22.730085 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:35:22.755372 1193189 cri.go:89] found id: ""
	I1209 04:35:22.755386 1193189 logs.go:282] 0 containers: []
	W1209 04:35:22.755393 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:35:22.755399 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:35:22.755468 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:35:22.781698 1193189 cri.go:89] found id: ""
	I1209 04:35:22.781712 1193189 logs.go:282] 0 containers: []
	W1209 04:35:22.781718 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:35:22.781724 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:35:22.781783 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:35:22.810395 1193189 cri.go:89] found id: ""
	I1209 04:35:22.810409 1193189 logs.go:282] 0 containers: []
	W1209 04:35:22.810417 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:35:22.810422 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:35:22.810491 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:35:22.834867 1193189 cri.go:89] found id: ""
	I1209 04:35:22.834881 1193189 logs.go:282] 0 containers: []
	W1209 04:35:22.834888 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:35:22.834896 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:35:22.834914 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:35:22.895493 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:35:22.895514 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:35:22.923338 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:35:22.923355 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:35:22.981048 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:35:22.981069 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:35:22.998202 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:35:22.998221 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:35:23.060221 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:35:23.052398   13038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:23.053078   13038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:23.054527   13038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:23.054989   13038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:23.056396   13038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:35:25.561920 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:35:25.571773 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:35:25.571837 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:35:25.595193 1193189 cri.go:89] found id: ""
	I1209 04:35:25.595207 1193189 logs.go:282] 0 containers: []
	W1209 04:35:25.595215 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:35:25.595220 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:35:25.595285 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:35:25.619637 1193189 cri.go:89] found id: ""
	I1209 04:35:25.619651 1193189 logs.go:282] 0 containers: []
	W1209 04:35:25.619658 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:35:25.619664 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:35:25.619726 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:35:25.644298 1193189 cri.go:89] found id: ""
	I1209 04:35:25.644313 1193189 logs.go:282] 0 containers: []
	W1209 04:35:25.644319 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:35:25.644325 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:35:25.644384 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:35:25.668990 1193189 cri.go:89] found id: ""
	I1209 04:35:25.669003 1193189 logs.go:282] 0 containers: []
	W1209 04:35:25.669011 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:35:25.669016 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:35:25.669078 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:35:25.693184 1193189 cri.go:89] found id: ""
	I1209 04:35:25.693199 1193189 logs.go:282] 0 containers: []
	W1209 04:35:25.693206 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:35:25.693211 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:35:25.693269 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:35:25.718924 1193189 cri.go:89] found id: ""
	I1209 04:35:25.718939 1193189 logs.go:282] 0 containers: []
	W1209 04:35:25.718946 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:35:25.718951 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:35:25.719014 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:35:25.744270 1193189 cri.go:89] found id: ""
	I1209 04:35:25.744287 1193189 logs.go:282] 0 containers: []
	W1209 04:35:25.744294 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:35:25.744303 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:35:25.744313 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:35:25.775297 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:35:25.775312 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:35:25.830399 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:35:25.830417 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:35:25.846995 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:35:25.847011 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:35:25.907973 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:35:25.899536   13140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:25.899964   13140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:25.901112   13140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:25.902600   13140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:25.903121   13140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:35:25.908000 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:35:25.908009 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:35:28.475800 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:35:28.486363 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:35:28.486434 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:35:28.511630 1193189 cri.go:89] found id: ""
	I1209 04:35:28.511649 1193189 logs.go:282] 0 containers: []
	W1209 04:35:28.511657 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:35:28.511662 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:35:28.511734 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:35:28.539616 1193189 cri.go:89] found id: ""
	I1209 04:35:28.539631 1193189 logs.go:282] 0 containers: []
	W1209 04:35:28.539638 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:35:28.539643 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:35:28.539704 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:35:28.563311 1193189 cri.go:89] found id: ""
	I1209 04:35:28.563325 1193189 logs.go:282] 0 containers: []
	W1209 04:35:28.563333 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:35:28.563338 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:35:28.563399 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:35:28.591490 1193189 cri.go:89] found id: ""
	I1209 04:35:28.591504 1193189 logs.go:282] 0 containers: []
	W1209 04:35:28.591511 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:35:28.591516 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:35:28.591574 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:35:28.614638 1193189 cri.go:89] found id: ""
	I1209 04:35:28.614653 1193189 logs.go:282] 0 containers: []
	W1209 04:35:28.614660 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:35:28.614665 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:35:28.614729 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:35:28.638698 1193189 cri.go:89] found id: ""
	I1209 04:35:28.638712 1193189 logs.go:282] 0 containers: []
	W1209 04:35:28.638720 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:35:28.638727 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:35:28.638788 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:35:28.665819 1193189 cri.go:89] found id: ""
	I1209 04:35:28.665837 1193189 logs.go:282] 0 containers: []
	W1209 04:35:28.665843 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:35:28.665851 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:35:28.665861 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:35:28.693372 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:35:28.693387 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:35:28.750183 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:35:28.750203 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:35:28.768641 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:35:28.768659 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:35:28.832332 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:35:28.823785   13239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:28.824261   13239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:28.826084   13239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:28.826770   13239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:28.828406   13239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:35:28.832342 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:35:28.832352 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:35:31.394797 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:35:31.404399 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:35:31.404459 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:35:31.427865 1193189 cri.go:89] found id: ""
	I1209 04:35:31.427879 1193189 logs.go:282] 0 containers: []
	W1209 04:35:31.427886 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:35:31.427893 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:35:31.427957 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:35:31.465245 1193189 cri.go:89] found id: ""
	I1209 04:35:31.465259 1193189 logs.go:282] 0 containers: []
	W1209 04:35:31.465266 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:35:31.465271 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:35:31.465333 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:35:31.499189 1193189 cri.go:89] found id: ""
	I1209 04:35:31.499202 1193189 logs.go:282] 0 containers: []
	W1209 04:35:31.499209 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:35:31.499215 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:35:31.499272 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:35:31.525936 1193189 cri.go:89] found id: ""
	I1209 04:35:31.525950 1193189 logs.go:282] 0 containers: []
	W1209 04:35:31.525958 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:35:31.525963 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:35:31.526023 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:35:31.550933 1193189 cri.go:89] found id: ""
	I1209 04:35:31.550948 1193189 logs.go:282] 0 containers: []
	W1209 04:35:31.550955 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:35:31.550960 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:35:31.551019 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:35:31.574667 1193189 cri.go:89] found id: ""
	I1209 04:35:31.574681 1193189 logs.go:282] 0 containers: []
	W1209 04:35:31.574689 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:35:31.574694 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:35:31.574754 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:35:31.599346 1193189 cri.go:89] found id: ""
	I1209 04:35:31.599360 1193189 logs.go:282] 0 containers: []
	W1209 04:35:31.599367 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:35:31.599374 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:35:31.599384 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:35:31.625893 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:35:31.625912 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:35:31.681164 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:35:31.681181 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:35:31.697997 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:35:31.698014 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:35:31.765231 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:35:31.757080   13343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:31.757463   13343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:31.759010   13343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:31.759311   13343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:31.760784   13343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:35:31.765242 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:35:31.765253 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:35:34.325149 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:35:34.334839 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:35:34.334897 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:35:34.359238 1193189 cri.go:89] found id: ""
	I1209 04:35:34.359251 1193189 logs.go:282] 0 containers: []
	W1209 04:35:34.359258 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:35:34.359263 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:35:34.359324 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:35:34.383217 1193189 cri.go:89] found id: ""
	I1209 04:35:34.383231 1193189 logs.go:282] 0 containers: []
	W1209 04:35:34.383237 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:35:34.383242 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:35:34.383301 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:35:34.407421 1193189 cri.go:89] found id: ""
	I1209 04:35:34.407435 1193189 logs.go:282] 0 containers: []
	W1209 04:35:34.407442 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:35:34.407454 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:35:34.407513 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:35:34.440852 1193189 cri.go:89] found id: ""
	I1209 04:35:34.440865 1193189 logs.go:282] 0 containers: []
	W1209 04:35:34.440872 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:35:34.440878 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:35:34.440938 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:35:34.474370 1193189 cri.go:89] found id: ""
	I1209 04:35:34.474382 1193189 logs.go:282] 0 containers: []
	W1209 04:35:34.474389 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:35:34.474400 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:35:34.474459 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:35:34.503074 1193189 cri.go:89] found id: ""
	I1209 04:35:34.503088 1193189 logs.go:282] 0 containers: []
	W1209 04:35:34.503095 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:35:34.503103 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:35:34.503160 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:35:34.533672 1193189 cri.go:89] found id: ""
	I1209 04:35:34.533686 1193189 logs.go:282] 0 containers: []
	W1209 04:35:34.533693 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:35:34.533701 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:35:34.533711 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:35:34.550119 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:35:34.550138 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:35:34.614817 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:35:34.606452   13437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:34.606849   13437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:34.608481   13437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:34.609159   13437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:34.610864   13437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:35:34.614827 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:35:34.614837 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:35:34.677461 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:35:34.677482 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:35:34.703505 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:35:34.703520 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
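
	The five "Gathering logs" passes above are minikube's fixed diagnostic sweep, run over SSH whenever the apiserver stops answering. A minimal sketch of the same sweep, assembled from the exact commands in this log (only the KVER variable is introduced here for illustration):

	# Reproduce minikube's log-gathering sweep by hand (sketch; KVER is hypothetical).
	KVER=v1.35.0-beta.0
	sudo journalctl -u kubelet -n 400                                        # kubelet logs
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400  # kernel warnings and worse
	sudo /var/lib/minikube/binaries/$KVER/kubectl describe nodes \
	  --kubeconfig=/var/lib/minikube/kubeconfig                              # node state (fails while the apiserver is down)
	sudo journalctl -u containerd -n 400                                     # container runtime logs
	sudo `which crictl || echo crictl` ps -a || sudo docker ps -a            # container status, with a docker fallback
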
	I1209 04:35:37.258780 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:35:37.268941 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:35:37.269002 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:35:37.292668 1193189 cri.go:89] found id: ""
	I1209 04:35:37.292682 1193189 logs.go:282] 0 containers: []
	W1209 04:35:37.292689 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:35:37.292694 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:35:37.292757 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:35:37.320157 1193189 cri.go:89] found id: ""
	I1209 04:35:37.320171 1193189 logs.go:282] 0 containers: []
	W1209 04:35:37.320177 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:35:37.320183 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:35:37.320240 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:35:37.343858 1193189 cri.go:89] found id: ""
	I1209 04:35:37.343872 1193189 logs.go:282] 0 containers: []
	W1209 04:35:37.343879 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:35:37.343884 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:35:37.343947 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:35:37.366919 1193189 cri.go:89] found id: ""
	I1209 04:35:37.366932 1193189 logs.go:282] 0 containers: []
	W1209 04:35:37.366939 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:35:37.366945 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:35:37.367003 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:35:37.391330 1193189 cri.go:89] found id: ""
	I1209 04:35:37.391344 1193189 logs.go:282] 0 containers: []
	W1209 04:35:37.391351 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:35:37.391356 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:35:37.391417 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:35:37.414885 1193189 cri.go:89] found id: ""
	I1209 04:35:37.414899 1193189 logs.go:282] 0 containers: []
	W1209 04:35:37.414906 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:35:37.414911 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:35:37.414967 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:35:37.440557 1193189 cri.go:89] found id: ""
	I1209 04:35:37.440570 1193189 logs.go:282] 0 containers: []
	W1209 04:35:37.440577 1193189 logs.go:284] No container was found matching "kindnet"
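
	Each poll then enumerates the control-plane components one name at a time with crictl; every query above came back empty. A loop equivalent to those seven queries (component list taken verbatim from the log):

	# One crictl query per control-plane component, as in the poll above (sketch).
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	  ids=$(sudo crictl ps -a --quiet --name="$name")      # --quiet prints container IDs only
	  [ -z "$ids" ] && echo "No container was found matching \"$name\""
	done
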
	I1209 04:35:37.440585 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:35:37.440595 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:35:37.501076 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:35:37.501094 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:35:37.523552 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:35:37.523569 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:35:37.590387 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:35:37.582017   13547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:37.582700   13547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:37.584424   13547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:37.584939   13547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:37.586547   13547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:35:37.590397 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:35:37.590408 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:35:37.653090 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:35:37.653108 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
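
	Each iteration opens with the apiserver liveness probe seen on the next line: pgrep matches the full kube-apiserver command line, and a non-zero exit triggers another diagnostic sweep. A sketch of that wait loop (the 3 s interval is an assumption inferred from the timestamp spacing, not a documented value):

	# Wait for the apiserver process the way the log polls for it (sketch).
	until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	  echo "kube-apiserver not running yet"
	  sleep 3   # assumed interval; the log shows roughly 3 s between polls
	done
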
	I1209 04:35:40.184839 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:35:40.195112 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:35:40.195177 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:35:40.221158 1193189 cri.go:89] found id: ""
	I1209 04:35:40.221173 1193189 logs.go:282] 0 containers: []
	W1209 04:35:40.221180 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:35:40.221185 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:35:40.221246 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:35:40.246395 1193189 cri.go:89] found id: ""
	I1209 04:35:40.246415 1193189 logs.go:282] 0 containers: []
	W1209 04:35:40.246422 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:35:40.246428 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:35:40.246487 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:35:40.270697 1193189 cri.go:89] found id: ""
	I1209 04:35:40.270711 1193189 logs.go:282] 0 containers: []
	W1209 04:35:40.270718 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:35:40.270723 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:35:40.270781 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:35:40.295006 1193189 cri.go:89] found id: ""
	I1209 04:35:40.295021 1193189 logs.go:282] 0 containers: []
	W1209 04:35:40.295028 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:35:40.295033 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:35:40.295093 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:35:40.319784 1193189 cri.go:89] found id: ""
	I1209 04:35:40.319797 1193189 logs.go:282] 0 containers: []
	W1209 04:35:40.319804 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:35:40.319810 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:35:40.319872 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:35:40.344094 1193189 cri.go:89] found id: ""
	I1209 04:35:40.344108 1193189 logs.go:282] 0 containers: []
	W1209 04:35:40.344115 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:35:40.344120 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:35:40.344181 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:35:40.368626 1193189 cri.go:89] found id: ""
	I1209 04:35:40.368640 1193189 logs.go:282] 0 containers: []
	W1209 04:35:40.368647 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:35:40.368654 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:35:40.368665 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:35:40.423837 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:35:40.423857 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:35:40.452134 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:35:40.452157 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:35:40.527559 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:35:40.519583   13649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:40.519986   13649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:40.521271   13649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:40.521835   13649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:40.523570   13649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:35:40.527610 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:35:40.527620 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:35:40.588474 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:35:40.588495 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
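
	The "container status" step is a small fallback chain: use crictl from PATH if present, otherwise try the bare name, and fall back to docker if crictl fails outright. Unrolled for readability (behaviorally equivalent to the one-liner in the log):

	# Fallback chain behind the container-status one-liner (sketch).
	CRICTL=$(which crictl || echo crictl)   # keep the bare name if `which` finds nothing
	if ! sudo "$CRICTL" ps -a; then         # crictl missing or erroring ...
	  sudo docker ps -a                     # ... so ask docker instead
	fi
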
	I1209 04:35:43.118634 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:35:43.128671 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:35:43.128738 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:35:43.152143 1193189 cri.go:89] found id: ""
	I1209 04:35:43.152158 1193189 logs.go:282] 0 containers: []
	W1209 04:35:43.152179 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:35:43.152185 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:35:43.152255 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:35:43.176188 1193189 cri.go:89] found id: ""
	I1209 04:35:43.176203 1193189 logs.go:282] 0 containers: []
	W1209 04:35:43.176210 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:35:43.176215 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:35:43.176275 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:35:43.199682 1193189 cri.go:89] found id: ""
	I1209 04:35:43.199696 1193189 logs.go:282] 0 containers: []
	W1209 04:35:43.199702 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:35:43.199707 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:35:43.199767 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:35:43.224229 1193189 cri.go:89] found id: ""
	I1209 04:35:43.224244 1193189 logs.go:282] 0 containers: []
	W1209 04:35:43.224251 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:35:43.224257 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:35:43.224318 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:35:43.249684 1193189 cri.go:89] found id: ""
	I1209 04:35:43.249698 1193189 logs.go:282] 0 containers: []
	W1209 04:35:43.249705 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:35:43.249710 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:35:43.249773 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:35:43.273701 1193189 cri.go:89] found id: ""
	I1209 04:35:43.273715 1193189 logs.go:282] 0 containers: []
	W1209 04:35:43.273724 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:35:43.273729 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:35:43.273790 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:35:43.297360 1193189 cri.go:89] found id: ""
	I1209 04:35:43.297375 1193189 logs.go:282] 0 containers: []
	W1209 04:35:43.297382 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:35:43.297389 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:35:43.297400 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:35:43.323849 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:35:43.323865 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:35:43.380806 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:35:43.380825 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:35:43.397905 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:35:43.397924 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:35:43.474648 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:35:43.464143   13758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:43.464857   13758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:43.468210   13758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:43.468799   13758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:43.470475   13758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
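
	"dial tcp [::1]:8441: connect: connection refused" means the TCP connection is actively rejected, i.e. nothing is listening on the apiserver port at all (8441 is the --apiserver-port this test passes to minikube start); it is not a timeout or a TLS problem. Two quick checks from inside the node to confirm that, offered as a suggestion rather than something this log runs:

	# Confirm there is no listener on the apiserver port (sketch).
	sudo ss -ltn 'sport = :8441'             # no rows beyond the header => nothing bound to 8441
	curl -sk https://localhost:8441/healthz  # "connection refused" while the apiserver is down
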
	I1209 04:35:43.474658 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:35:43.474668 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:35:46.038037 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:35:46.048448 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:35:46.048513 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:35:46.073156 1193189 cri.go:89] found id: ""
	I1209 04:35:46.073170 1193189 logs.go:282] 0 containers: []
	W1209 04:35:46.073177 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:35:46.073182 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:35:46.073246 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:35:46.103227 1193189 cri.go:89] found id: ""
	I1209 04:35:46.103242 1193189 logs.go:282] 0 containers: []
	W1209 04:35:46.103249 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:35:46.103255 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:35:46.103324 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:35:46.126371 1193189 cri.go:89] found id: ""
	I1209 04:35:46.126385 1193189 logs.go:282] 0 containers: []
	W1209 04:35:46.126392 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:35:46.126397 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:35:46.126457 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:35:46.151271 1193189 cri.go:89] found id: ""
	I1209 04:35:46.151284 1193189 logs.go:282] 0 containers: []
	W1209 04:35:46.151291 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:35:46.151296 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:35:46.151354 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:35:46.175057 1193189 cri.go:89] found id: ""
	I1209 04:35:46.175071 1193189 logs.go:282] 0 containers: []
	W1209 04:35:46.175077 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:35:46.175082 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:35:46.175140 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:35:46.203063 1193189 cri.go:89] found id: ""
	I1209 04:35:46.203078 1193189 logs.go:282] 0 containers: []
	W1209 04:35:46.203085 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:35:46.203091 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:35:46.203148 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:35:46.229251 1193189 cri.go:89] found id: ""
	I1209 04:35:46.229267 1193189 logs.go:282] 0 containers: []
	W1209 04:35:46.229274 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:35:46.229281 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:35:46.229291 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:35:46.298699 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:35:46.289900   13846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:46.290515   13846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:46.292304   13846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:46.292640   13846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:46.294235   13846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:35:46.298709 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:35:46.298720 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:35:46.363949 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:35:46.363976 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:35:46.391889 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:35:46.391906 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:35:46.454456 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:35:46.454483 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
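
	The dmesg invocation used in every sweep, annotated (flags per util-linux dmesg):

	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	# -H  human-readable timestamps        -P  do not pipe into a pager
	# -L=never  disable color              --level  keep warning-and-worse messages
	# tail -n 400 caps the output at the most recent 400 lines
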
	I1209 04:35:48.975649 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:35:48.985708 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:35:48.985766 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:35:49.011399 1193189 cri.go:89] found id: ""
	I1209 04:35:49.011413 1193189 logs.go:282] 0 containers: []
	W1209 04:35:49.011420 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:35:49.011426 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:35:49.011483 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:35:49.036873 1193189 cri.go:89] found id: ""
	I1209 04:35:49.036887 1193189 logs.go:282] 0 containers: []
	W1209 04:35:49.036894 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:35:49.036899 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:35:49.036960 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:35:49.066005 1193189 cri.go:89] found id: ""
	I1209 04:35:49.066019 1193189 logs.go:282] 0 containers: []
	W1209 04:35:49.066025 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:35:49.066031 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:35:49.066091 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:35:49.093270 1193189 cri.go:89] found id: ""
	I1209 04:35:49.093284 1193189 logs.go:282] 0 containers: []
	W1209 04:35:49.093291 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:35:49.093297 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:35:49.093357 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:35:49.116583 1193189 cri.go:89] found id: ""
	I1209 04:35:49.116597 1193189 logs.go:282] 0 containers: []
	W1209 04:35:49.116604 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:35:49.116609 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:35:49.116667 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:35:49.141295 1193189 cri.go:89] found id: ""
	I1209 04:35:49.141309 1193189 logs.go:282] 0 containers: []
	W1209 04:35:49.141316 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:35:49.141321 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:35:49.141382 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:35:49.164496 1193189 cri.go:89] found id: ""
	I1209 04:35:49.164509 1193189 logs.go:282] 0 containers: []
	W1209 04:35:49.164516 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:35:49.164524 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:35:49.164533 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:35:49.220406 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:35:49.220426 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:35:49.237143 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:35:49.237159 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:35:49.305702 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:35:49.296253   13955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:49.297596   13955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:49.298689   13955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:49.299456   13955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:49.301121   13955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:35:49.305724 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:35:49.305737 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:35:49.367200 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:35:49.367219 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:35:51.895283 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:35:51.905706 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:35:51.905765 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:35:51.929677 1193189 cri.go:89] found id: ""
	I1209 04:35:51.929691 1193189 logs.go:282] 0 containers: []
	W1209 04:35:51.929698 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:35:51.929703 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:35:51.929764 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:35:51.953232 1193189 cri.go:89] found id: ""
	I1209 04:35:51.953246 1193189 logs.go:282] 0 containers: []
	W1209 04:35:51.953252 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:35:51.953257 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:35:51.953314 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:35:51.979515 1193189 cri.go:89] found id: ""
	I1209 04:35:51.979528 1193189 logs.go:282] 0 containers: []
	W1209 04:35:51.979535 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:35:51.979540 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:35:51.979601 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:35:52.009061 1193189 cri.go:89] found id: ""
	I1209 04:35:52.009075 1193189 logs.go:282] 0 containers: []
	W1209 04:35:52.009082 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:35:52.009087 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:35:52.009154 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:35:52.036289 1193189 cri.go:89] found id: ""
	I1209 04:35:52.036309 1193189 logs.go:282] 0 containers: []
	W1209 04:35:52.036316 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:35:52.036321 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:35:52.036386 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:35:52.061853 1193189 cri.go:89] found id: ""
	I1209 04:35:52.061867 1193189 logs.go:282] 0 containers: []
	W1209 04:35:52.061874 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:35:52.061879 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:35:52.061942 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:35:52.090416 1193189 cri.go:89] found id: ""
	I1209 04:35:52.090443 1193189 logs.go:282] 0 containers: []
	W1209 04:35:52.090451 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:35:52.090459 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:35:52.090469 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:35:52.120980 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:35:52.120996 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:35:52.177079 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:35:52.177098 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:35:52.195520 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:35:52.195537 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:35:52.260151 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:35:52.251913   14072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:52.252734   14072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:52.254403   14072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:52.254982   14072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:52.256470   14072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:35:52.260161 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:35:52.260172 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:35:54.821803 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:35:54.831356 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:35:54.831415 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:35:54.855283 1193189 cri.go:89] found id: ""
	I1209 04:35:54.855298 1193189 logs.go:282] 0 containers: []
	W1209 04:35:54.855304 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:35:54.855309 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:35:54.855369 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:35:54.889160 1193189 cri.go:89] found id: ""
	I1209 04:35:54.889174 1193189 logs.go:282] 0 containers: []
	W1209 04:35:54.889181 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:35:54.889186 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:35:54.889245 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:35:54.912925 1193189 cri.go:89] found id: ""
	I1209 04:35:54.912939 1193189 logs.go:282] 0 containers: []
	W1209 04:35:54.912946 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:35:54.912951 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:35:54.913019 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:35:54.937856 1193189 cri.go:89] found id: ""
	I1209 04:35:54.937869 1193189 logs.go:282] 0 containers: []
	W1209 04:35:54.937876 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:35:54.937881 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:35:54.937939 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:35:54.961607 1193189 cri.go:89] found id: ""
	I1209 04:35:54.961620 1193189 logs.go:282] 0 containers: []
	W1209 04:35:54.961626 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:35:54.961632 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:35:54.961692 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:35:54.984614 1193189 cri.go:89] found id: ""
	I1209 04:35:54.984627 1193189 logs.go:282] 0 containers: []
	W1209 04:35:54.984634 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:35:54.984639 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:35:54.984702 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:35:55.019938 1193189 cri.go:89] found id: ""
	I1209 04:35:55.019952 1193189 logs.go:282] 0 containers: []
	W1209 04:35:55.019959 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:35:55.019967 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:35:55.019977 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:35:55.076703 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:35:55.076722 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:35:55.094781 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:35:55.094801 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:35:55.164076 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:35:55.155994   14165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:55.156899   14165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:55.158415   14165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:55.158819   14165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:55.160056   14165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:35:55.164088 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:35:55.164098 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:35:55.225429 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:35:55.225451 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:35:57.756131 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:35:57.766096 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:35:57.766152 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:35:57.794059 1193189 cri.go:89] found id: ""
	I1209 04:35:57.794073 1193189 logs.go:282] 0 containers: []
	W1209 04:35:57.794080 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:35:57.794085 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:35:57.794142 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:35:57.817501 1193189 cri.go:89] found id: ""
	I1209 04:35:57.817514 1193189 logs.go:282] 0 containers: []
	W1209 04:35:57.817520 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:35:57.817526 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:35:57.817582 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:35:57.841800 1193189 cri.go:89] found id: ""
	I1209 04:35:57.841814 1193189 logs.go:282] 0 containers: []
	W1209 04:35:57.841821 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:35:57.841841 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:35:57.841905 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:35:57.865096 1193189 cri.go:89] found id: ""
	I1209 04:35:57.865109 1193189 logs.go:282] 0 containers: []
	W1209 04:35:57.865116 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:35:57.865122 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:35:57.865185 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:35:57.889214 1193189 cri.go:89] found id: ""
	I1209 04:35:57.889227 1193189 logs.go:282] 0 containers: []
	W1209 04:35:57.889234 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:35:57.889240 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:35:57.889299 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:35:57.913077 1193189 cri.go:89] found id: ""
	I1209 04:35:57.913090 1193189 logs.go:282] 0 containers: []
	W1209 04:35:57.913097 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:35:57.913102 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:35:57.913164 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:35:57.938101 1193189 cri.go:89] found id: ""
	I1209 04:35:57.938114 1193189 logs.go:282] 0 containers: []
	W1209 04:35:57.938121 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:35:57.938129 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:35:57.938139 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:35:57.968546 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:35:57.968563 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:35:58.025605 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:35:58.025626 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:35:58.042537 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:35:58.042554 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:35:58.112285 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:35:58.104144   14281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:58.104837   14281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:58.106385   14281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:58.106802   14281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:58.108456   14281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:35:58.112295 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:35:58.112317 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:36:00.674623 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:36:00.684871 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:36:00.684932 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:36:00.723046 1193189 cri.go:89] found id: ""
	I1209 04:36:00.723060 1193189 logs.go:282] 0 containers: []
	W1209 04:36:00.723067 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:36:00.723082 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:36:00.723142 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:36:00.755063 1193189 cri.go:89] found id: ""
	I1209 04:36:00.755077 1193189 logs.go:282] 0 containers: []
	W1209 04:36:00.755094 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:36:00.755100 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:36:00.755170 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:36:00.780343 1193189 cri.go:89] found id: ""
	I1209 04:36:00.780357 1193189 logs.go:282] 0 containers: []
	W1209 04:36:00.780368 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:36:00.780373 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:36:00.780432 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:36:00.805177 1193189 cri.go:89] found id: ""
	I1209 04:36:00.805191 1193189 logs.go:282] 0 containers: []
	W1209 04:36:00.805198 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:36:00.805203 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:36:00.805261 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:36:00.829413 1193189 cri.go:89] found id: ""
	I1209 04:36:00.829426 1193189 logs.go:282] 0 containers: []
	W1209 04:36:00.829432 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:36:00.829439 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:36:00.829500 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:36:00.853086 1193189 cri.go:89] found id: ""
	I1209 04:36:00.853100 1193189 logs.go:282] 0 containers: []
	W1209 04:36:00.853107 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:36:00.853112 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:36:00.853185 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:36:00.881064 1193189 cri.go:89] found id: ""
	I1209 04:36:00.881078 1193189 logs.go:282] 0 containers: []
	W1209 04:36:00.881085 1193189 logs.go:284] No container was found matching "kindnet"
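	Every retry iteration issues one crictl query per expected control-plane component, and each returns an empty ID list here. The individual commands are verbatim from the log; the loop form below is only shorthand for reading them together:

	    # one listing per component; empty output means the container was never created
	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	      sudo crictl ps -a --quiet --name="$name"
	    done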
	I1209 04:36:00.881093 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:36:00.881103 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:36:00.950102 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:36:00.942130   14365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:00.942767   14365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:00.944430   14365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:00.944779   14365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:00.946290   14365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:36:00.950112 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:36:00.950123 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:36:01.012065 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:36:01.012086 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:36:01.041323 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:36:01.041339 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:36:01.099024 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:36:01.099044 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:36:03.616785 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:36:03.626636 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:36:03.626697 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:36:03.650973 1193189 cri.go:89] found id: ""
	I1209 04:36:03.650987 1193189 logs.go:282] 0 containers: []
	W1209 04:36:03.650994 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:36:03.650999 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:36:03.651060 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:36:03.674678 1193189 cri.go:89] found id: ""
	I1209 04:36:03.674692 1193189 logs.go:282] 0 containers: []
	W1209 04:36:03.674699 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:36:03.674705 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:36:03.674777 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:36:03.705193 1193189 cri.go:89] found id: ""
	I1209 04:36:03.705206 1193189 logs.go:282] 0 containers: []
	W1209 04:36:03.705213 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:36:03.705218 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:36:03.705281 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:36:03.733013 1193189 cri.go:89] found id: ""
	I1209 04:36:03.733026 1193189 logs.go:282] 0 containers: []
	W1209 04:36:03.733033 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:36:03.733038 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:36:03.733096 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:36:03.770375 1193189 cri.go:89] found id: ""
	I1209 04:36:03.770389 1193189 logs.go:282] 0 containers: []
	W1209 04:36:03.770396 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:36:03.770401 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:36:03.770457 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:36:03.793967 1193189 cri.go:89] found id: ""
	I1209 04:36:03.793980 1193189 logs.go:282] 0 containers: []
	W1209 04:36:03.793987 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:36:03.793992 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:36:03.794053 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:36:03.818652 1193189 cri.go:89] found id: ""
	I1209 04:36:03.818666 1193189 logs.go:282] 0 containers: []
	W1209 04:36:03.818672 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:36:03.818681 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:36:03.818691 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:36:03.873671 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:36:03.873692 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:36:03.890142 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:36:03.890159 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:36:03.958206 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:36:03.950384   14475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:03.950766   14475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:03.952402   14475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:03.952806   14475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:03.954365   14475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:36:03.958216 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:36:03.958227 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:36:04.019401 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:36:04.019421 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
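	The timestamps (04:35:58, 04:36:00, 04:36:03, 04:36:06, ...) show the health loop re-running every two to three seconds: probe for an apiserver process, list containers, gather logs, retry. A sketch of the equivalent wait loop, assuming a fixed interval; the real scheduling lives in minikube's Go code and is not visible in this log:

	    # poll until an apiserver process appears, mirroring the cadence in the timestamps
	    until sudo pgrep -xnf 'kube-apiserver.*minikube.*'; do
	      sleep 3   # interval inferred from the timestamps, not taken from source
	    done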
	I1209 04:36:06.551878 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:36:06.561600 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:36:06.561657 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:36:06.585277 1193189 cri.go:89] found id: ""
	I1209 04:36:06.585291 1193189 logs.go:282] 0 containers: []
	W1209 04:36:06.585298 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:36:06.585304 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:36:06.585366 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:36:06.613401 1193189 cri.go:89] found id: ""
	I1209 04:36:06.613415 1193189 logs.go:282] 0 containers: []
	W1209 04:36:06.613421 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:36:06.613426 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:36:06.613483 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:36:06.642329 1193189 cri.go:89] found id: ""
	I1209 04:36:06.642342 1193189 logs.go:282] 0 containers: []
	W1209 04:36:06.642349 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:36:06.642354 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:36:06.642413 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:36:06.666445 1193189 cri.go:89] found id: ""
	I1209 04:36:06.666458 1193189 logs.go:282] 0 containers: []
	W1209 04:36:06.666465 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:36:06.666470 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:36:06.666527 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:36:06.695405 1193189 cri.go:89] found id: ""
	I1209 04:36:06.695419 1193189 logs.go:282] 0 containers: []
	W1209 04:36:06.695425 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:36:06.695431 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:36:06.695488 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:36:06.734331 1193189 cri.go:89] found id: ""
	I1209 04:36:06.734345 1193189 logs.go:282] 0 containers: []
	W1209 04:36:06.734361 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:36:06.734372 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:36:06.734441 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:36:06.766210 1193189 cri.go:89] found id: ""
	I1209 04:36:06.766223 1193189 logs.go:282] 0 containers: []
	W1209 04:36:06.766231 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:36:06.766238 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:36:06.766248 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:36:06.822607 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:36:06.822627 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:36:06.839326 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:36:06.839342 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:36:06.900387 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:36:06.892243   14581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:06.892630   14581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:06.894401   14581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:06.894869   14581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:06.896343   14581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:36:06.900405 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:36:06.900421 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:36:06.961047 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:36:06.961067 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:36:09.488140 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:36:09.498332 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:36:09.498409 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:36:09.523347 1193189 cri.go:89] found id: ""
	I1209 04:36:09.523373 1193189 logs.go:282] 0 containers: []
	W1209 04:36:09.523380 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:36:09.523387 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:36:09.523459 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:36:09.550096 1193189 cri.go:89] found id: ""
	I1209 04:36:09.550111 1193189 logs.go:282] 0 containers: []
	W1209 04:36:09.550117 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:36:09.550123 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:36:09.550185 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:36:09.578695 1193189 cri.go:89] found id: ""
	I1209 04:36:09.578709 1193189 logs.go:282] 0 containers: []
	W1209 04:36:09.578715 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:36:09.578720 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:36:09.578784 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:36:09.607079 1193189 cri.go:89] found id: ""
	I1209 04:36:09.607093 1193189 logs.go:282] 0 containers: []
	W1209 04:36:09.607100 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:36:09.607105 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:36:09.607166 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:36:09.635495 1193189 cri.go:89] found id: ""
	I1209 04:36:09.635510 1193189 logs.go:282] 0 containers: []
	W1209 04:36:09.635516 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:36:09.635521 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:36:09.635584 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:36:09.661747 1193189 cri.go:89] found id: ""
	I1209 04:36:09.661761 1193189 logs.go:282] 0 containers: []
	W1209 04:36:09.661767 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:36:09.661773 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:36:09.661831 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:36:09.694535 1193189 cri.go:89] found id: ""
	I1209 04:36:09.694549 1193189 logs.go:282] 0 containers: []
	W1209 04:36:09.694556 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:36:09.694564 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:36:09.694574 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:36:09.759636 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:36:09.759656 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:36:09.777485 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:36:09.777502 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:36:09.841963 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:36:09.834188   14687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:09.834610   14687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:09.836196   14687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:09.836779   14687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:09.838239   14687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:36:09.841974 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:36:09.841984 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:36:09.904615 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:36:09.904636 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
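	The dmesg gather keeps only higher-severity kernel messages. The command below is copied verbatim from the log and can be run as-is on the node: -P disables the pager, -H forces human-readable output, -L=never disables color, and --level restricts the severities collected:

	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400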
	I1209 04:36:12.433539 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:36:12.443370 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:36:12.443435 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:36:12.469616 1193189 cri.go:89] found id: ""
	I1209 04:36:12.469630 1193189 logs.go:282] 0 containers: []
	W1209 04:36:12.469637 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:36:12.469643 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:36:12.469704 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:36:12.493917 1193189 cri.go:89] found id: ""
	I1209 04:36:12.493930 1193189 logs.go:282] 0 containers: []
	W1209 04:36:12.493937 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:36:12.493942 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:36:12.494001 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:36:12.518803 1193189 cri.go:89] found id: ""
	I1209 04:36:12.518817 1193189 logs.go:282] 0 containers: []
	W1209 04:36:12.518842 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:36:12.518848 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:36:12.518917 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:36:12.542764 1193189 cri.go:89] found id: ""
	I1209 04:36:12.542785 1193189 logs.go:282] 0 containers: []
	W1209 04:36:12.542792 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:36:12.542797 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:36:12.542859 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:36:12.566738 1193189 cri.go:89] found id: ""
	I1209 04:36:12.566751 1193189 logs.go:282] 0 containers: []
	W1209 04:36:12.566758 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:36:12.566762 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:36:12.566830 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:36:12.594757 1193189 cri.go:89] found id: ""
	I1209 04:36:12.594772 1193189 logs.go:282] 0 containers: []
	W1209 04:36:12.594778 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:36:12.594784 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:36:12.594850 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:36:12.619407 1193189 cri.go:89] found id: ""
	I1209 04:36:12.619421 1193189 logs.go:282] 0 containers: []
	W1209 04:36:12.619427 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:36:12.619434 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:36:12.619445 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:36:12.692974 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:36:12.683791   14780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:12.684626   14780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:12.686439   14780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:12.687100   14780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:12.688999   14780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:36:12.692984 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:36:12.693001 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:36:12.766313 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:36:12.766340 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:36:12.793057 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:36:12.793075 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:36:12.849665 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:36:12.849689 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:36:15.366796 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:36:15.376649 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:36:15.376719 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:36:15.400344 1193189 cri.go:89] found id: ""
	I1209 04:36:15.400358 1193189 logs.go:282] 0 containers: []
	W1209 04:36:15.400372 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:36:15.400378 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:36:15.400437 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:36:15.425809 1193189 cri.go:89] found id: ""
	I1209 04:36:15.425822 1193189 logs.go:282] 0 containers: []
	W1209 04:36:15.425829 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:36:15.425834 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:36:15.425894 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:36:15.450444 1193189 cri.go:89] found id: ""
	I1209 04:36:15.450458 1193189 logs.go:282] 0 containers: []
	W1209 04:36:15.450466 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:36:15.450471 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:36:15.450531 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:36:15.478163 1193189 cri.go:89] found id: ""
	I1209 04:36:15.478178 1193189 logs.go:282] 0 containers: []
	W1209 04:36:15.478185 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:36:15.478190 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:36:15.478261 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:36:15.502360 1193189 cri.go:89] found id: ""
	I1209 04:36:15.502374 1193189 logs.go:282] 0 containers: []
	W1209 04:36:15.502381 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:36:15.502386 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:36:15.502450 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:36:15.530599 1193189 cri.go:89] found id: ""
	I1209 04:36:15.530614 1193189 logs.go:282] 0 containers: []
	W1209 04:36:15.530620 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:36:15.530626 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:36:15.530693 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:36:15.554654 1193189 cri.go:89] found id: ""
	I1209 04:36:15.554668 1193189 logs.go:282] 0 containers: []
	W1209 04:36:15.554675 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:36:15.554683 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:36:15.554693 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:36:15.614962 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:36:15.614982 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:36:15.641417 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:36:15.641433 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:36:15.696674 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:36:15.696692 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:36:15.714032 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:36:15.714047 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:36:15.786226 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:36:15.778061   14907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:15.778499   14907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:15.780149   14907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:15.780759   14907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:15.782381   14907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
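	Alongside the kubelet unit, each pass also pulls the containerd journal; with no containers ever created, these two journals (commands verbatim from the log) are the first place to look for image-pull, sandbox-creation, or static-pod errors that would explain the empty listings:

	    sudo journalctl -u kubelet -n 400
	    sudo journalctl -u containerd -n 400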
	I1209 04:36:18.286483 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:36:18.296288 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:36:18.296346 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:36:18.323616 1193189 cri.go:89] found id: ""
	I1209 04:36:18.323629 1193189 logs.go:282] 0 containers: []
	W1209 04:36:18.323636 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:36:18.323642 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:36:18.323706 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:36:18.348203 1193189 cri.go:89] found id: ""
	I1209 04:36:18.348218 1193189 logs.go:282] 0 containers: []
	W1209 04:36:18.348225 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:36:18.348231 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:36:18.348290 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:36:18.372639 1193189 cri.go:89] found id: ""
	I1209 04:36:18.372653 1193189 logs.go:282] 0 containers: []
	W1209 04:36:18.372660 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:36:18.372671 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:36:18.372732 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:36:18.400006 1193189 cri.go:89] found id: ""
	I1209 04:36:18.400037 1193189 logs.go:282] 0 containers: []
	W1209 04:36:18.400044 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:36:18.400049 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:36:18.400120 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:36:18.424038 1193189 cri.go:89] found id: ""
	I1209 04:36:18.424053 1193189 logs.go:282] 0 containers: []
	W1209 04:36:18.424060 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:36:18.424068 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:36:18.424135 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:36:18.447692 1193189 cri.go:89] found id: ""
	I1209 04:36:18.447719 1193189 logs.go:282] 0 containers: []
	W1209 04:36:18.447726 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:36:18.447737 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:36:18.447809 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:36:18.473888 1193189 cri.go:89] found id: ""
	I1209 04:36:18.473902 1193189 logs.go:282] 0 containers: []
	W1209 04:36:18.473908 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:36:18.473916 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:36:18.473925 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:36:18.531920 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:36:18.531945 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:36:18.549523 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:36:18.549540 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:36:18.610296 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:36:18.601988   14993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:18.602374   14993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:18.603904   14993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:18.604520   14993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:18.606270   14993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:36:18.610306 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:36:18.610316 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:36:18.673185 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:36:18.673204 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:36:21.215945 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:36:21.225779 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:36:21.225842 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:36:21.251614 1193189 cri.go:89] found id: ""
	I1209 04:36:21.251627 1193189 logs.go:282] 0 containers: []
	W1209 04:36:21.251633 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:36:21.251639 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:36:21.251701 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:36:21.274988 1193189 cri.go:89] found id: ""
	I1209 04:36:21.275002 1193189 logs.go:282] 0 containers: []
	W1209 04:36:21.275009 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:36:21.275016 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:36:21.275073 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:36:21.298100 1193189 cri.go:89] found id: ""
	I1209 04:36:21.298113 1193189 logs.go:282] 0 containers: []
	W1209 04:36:21.298120 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:36:21.298125 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:36:21.298188 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:36:21.323043 1193189 cri.go:89] found id: ""
	I1209 04:36:21.323057 1193189 logs.go:282] 0 containers: []
	W1209 04:36:21.323063 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:36:21.323068 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:36:21.323128 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:36:21.346629 1193189 cri.go:89] found id: ""
	I1209 04:36:21.346642 1193189 logs.go:282] 0 containers: []
	W1209 04:36:21.346649 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:36:21.346654 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:36:21.346713 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:36:21.370687 1193189 cri.go:89] found id: ""
	I1209 04:36:21.370700 1193189 logs.go:282] 0 containers: []
	W1209 04:36:21.370707 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:36:21.370712 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:36:21.370767 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:36:21.394774 1193189 cri.go:89] found id: ""
	I1209 04:36:21.394788 1193189 logs.go:282] 0 containers: []
	W1209 04:36:21.394794 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:36:21.394803 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:36:21.394813 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:36:21.458240 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:36:21.449927   15094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:21.450664   15094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:21.452537   15094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:21.452900   15094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:21.454442   15094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:36:21.458249 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:36:21.458260 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:36:21.519830 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:36:21.519850 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:36:21.556076 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:36:21.556093 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:36:21.614749 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:36:21.614769 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
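	The container-status gather has a built-in fallback chain, verbatim from the log: it resolves crictl with which, and if the crictl listing fails it degrades to docker ps. The backticks are evaluated on the node, so the probe still produces output whether or not crictl is on the PATH:

	    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a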
	I1209 04:36:24.132222 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:36:24.143277 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:36:24.143352 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:36:24.173051 1193189 cri.go:89] found id: ""
	I1209 04:36:24.173065 1193189 logs.go:282] 0 containers: []
	W1209 04:36:24.173072 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:36:24.173077 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:36:24.173134 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:36:24.198407 1193189 cri.go:89] found id: ""
	I1209 04:36:24.198421 1193189 logs.go:282] 0 containers: []
	W1209 04:36:24.198428 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:36:24.198432 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:36:24.198490 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:36:24.224986 1193189 cri.go:89] found id: ""
	I1209 04:36:24.225000 1193189 logs.go:282] 0 containers: []
	W1209 04:36:24.225007 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:36:24.225012 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:36:24.225071 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:36:24.249942 1193189 cri.go:89] found id: ""
	I1209 04:36:24.249957 1193189 logs.go:282] 0 containers: []
	W1209 04:36:24.249964 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:36:24.249969 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:36:24.250031 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:36:24.274252 1193189 cri.go:89] found id: ""
	I1209 04:36:24.274266 1193189 logs.go:282] 0 containers: []
	W1209 04:36:24.274273 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:36:24.274278 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:36:24.274347 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:36:24.302468 1193189 cri.go:89] found id: ""
	I1209 04:36:24.302485 1193189 logs.go:282] 0 containers: []
	W1209 04:36:24.302491 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:36:24.302497 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:36:24.302582 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:36:24.328883 1193189 cri.go:89] found id: ""
	I1209 04:36:24.328898 1193189 logs.go:282] 0 containers: []
	W1209 04:36:24.328905 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:36:24.328913 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:36:24.328923 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:36:24.386082 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:36:24.386102 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:36:24.403782 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:36:24.403798 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:36:24.473588 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:36:24.462744   15202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:24.463330   15202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:24.466259   15202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:24.467650   15202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:24.468411   15202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:36:24.462744   15202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:24.463330   15202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:24.466259   15202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:24.467650   15202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:24.468411   15202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:36:24.473598 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:36:24.473609 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:36:24.534819 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:36:24.534841 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
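Every crictl query in this pass returns an empty ID list, so no control-plane container was ever created; the describe-nodes call then fails simply because nothing is listening on the apiserver port. A quick sketch for confirming that directly on the node; it assumes the ss and curl binaries are available there and that the /livez health endpoint of a recent kube-apiserver would answer if the process were up:

    # Is anything bound to the apiserver port?
    sudo ss -tlnp | grep ':8441' || echo "nothing listening on 8441"

    # A running apiserver would answer its health endpoint:
    curl -ksS --max-time 2 https://localhost:8441/livez || echo "apiserver unreachable"

    # The same probe through the kubectl binary and kubeconfig used in this log:
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl \
      --kubeconfig=/var/lib/minikube/kubeconfig get --raw /livez
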
	[... the identical diagnostic pass repeats roughly every three seconds, at 04:36:27, 04:36:30, 04:36:33, 04:36:36, 04:36:39, 04:36:42, 04:36:45 and 04:36:48, differing only in timestamps and kubectl PIDs: pgrep finds no kube-apiserver process, every crictl query (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet) returns an empty ID list, and each "kubectl describe nodes" attempt fails with the same connection-refused errors against localhost:8441 ...]
	E1209 04:36:48.074652   16045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:48.075018   16045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:48.076580   16045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:36:48.081095 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:36:48.081114 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:36:48.144865 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:36:48.144883 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
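Each cycle queries the CRI for one control-plane component at a time and finds no containers at all, running or exited. The same per-component check can be run compactly inside the node; this is a sketch of the exact queries the log shows, not minikube's own code:

    # Query each control-plane component the way the log does (sketch).
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet; do
      ids=$(sudo crictl ps -a --quiet --name="$c")
      echo "$c: ${ids:-no containers found}"
    done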
	I1209 04:36:50.676655 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:36:50.687887 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:36:50.687948 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:36:50.712477 1193189 cri.go:89] found id: ""
	I1209 04:36:50.712492 1193189 logs.go:282] 0 containers: []
	W1209 04:36:50.712498 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:36:50.712504 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:36:50.712560 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:36:50.743459 1193189 cri.go:89] found id: ""
	I1209 04:36:50.743472 1193189 logs.go:282] 0 containers: []
	W1209 04:36:50.743479 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:36:50.743484 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:36:50.743559 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:36:50.769066 1193189 cri.go:89] found id: ""
	I1209 04:36:50.769080 1193189 logs.go:282] 0 containers: []
	W1209 04:36:50.769087 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:36:50.769093 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:36:50.769149 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:36:50.792910 1193189 cri.go:89] found id: ""
	I1209 04:36:50.792924 1193189 logs.go:282] 0 containers: []
	W1209 04:36:50.792931 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:36:50.792942 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:36:50.793002 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:36:50.817006 1193189 cri.go:89] found id: ""
	I1209 04:36:50.817020 1193189 logs.go:282] 0 containers: []
	W1209 04:36:50.817027 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:36:50.817033 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:36:50.817108 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:36:50.840981 1193189 cri.go:89] found id: ""
	I1209 04:36:50.840995 1193189 logs.go:282] 0 containers: []
	W1209 04:36:50.841002 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:36:50.841007 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:36:50.841065 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:36:50.864484 1193189 cri.go:89] found id: ""
	I1209 04:36:50.864498 1193189 logs.go:282] 0 containers: []
	W1209 04:36:50.864504 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:36:50.864512 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:36:50.864522 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:36:50.934409 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:36:50.923680   16138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:50.924264   16138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:50.925919   16138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:50.926350   16138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:50.927812   16138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:36:50.934428 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:36:50.934439 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:36:51.007145 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:36:51.007168 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:36:51.035885 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:36:51.035901 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:36:51.094880 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:36:51.094903 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:36:53.613358 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:36:53.623300 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:36:53.623360 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:36:53.649605 1193189 cri.go:89] found id: ""
	I1209 04:36:53.649619 1193189 logs.go:282] 0 containers: []
	W1209 04:36:53.649625 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:36:53.649630 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:36:53.649688 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:36:53.673756 1193189 cri.go:89] found id: ""
	I1209 04:36:53.673771 1193189 logs.go:282] 0 containers: []
	W1209 04:36:53.673777 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:36:53.673782 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:36:53.673841 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:36:53.697312 1193189 cri.go:89] found id: ""
	I1209 04:36:53.697326 1193189 logs.go:282] 0 containers: []
	W1209 04:36:53.697333 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:36:53.697339 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:36:53.697405 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:36:53.721559 1193189 cri.go:89] found id: ""
	I1209 04:36:53.721573 1193189 logs.go:282] 0 containers: []
	W1209 04:36:53.721580 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:36:53.721585 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:36:53.721643 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:36:53.745640 1193189 cri.go:89] found id: ""
	I1209 04:36:53.745654 1193189 logs.go:282] 0 containers: []
	W1209 04:36:53.745661 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:36:53.745666 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:36:53.745724 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:36:53.770072 1193189 cri.go:89] found id: ""
	I1209 04:36:53.770086 1193189 logs.go:282] 0 containers: []
	W1209 04:36:53.770093 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:36:53.770099 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:36:53.770161 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:36:53.793834 1193189 cri.go:89] found id: ""
	I1209 04:36:53.793848 1193189 logs.go:282] 0 containers: []
	W1209 04:36:53.793856 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:36:53.793864 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:36:53.793873 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:36:53.853273 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:36:53.853293 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:36:53.870522 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:36:53.870539 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:36:53.937367 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:36:53.928497   16249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:53.929009   16249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:53.930701   16249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:53.931304   16249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:53.932870   16249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:36:53.937377 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:36:53.937387 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:36:54.005219 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:36:54.005240 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:36:56.538809 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:36:56.548679 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:36:56.548738 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:36:56.572505 1193189 cri.go:89] found id: ""
	I1209 04:36:56.572519 1193189 logs.go:282] 0 containers: []
	W1209 04:36:56.572526 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:36:56.572531 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:36:56.572591 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:36:56.596732 1193189 cri.go:89] found id: ""
	I1209 04:36:56.596746 1193189 logs.go:282] 0 containers: []
	W1209 04:36:56.596753 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:36:56.596758 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:36:56.596817 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:36:56.622042 1193189 cri.go:89] found id: ""
	I1209 04:36:56.622056 1193189 logs.go:282] 0 containers: []
	W1209 04:36:56.622063 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:36:56.622068 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:36:56.622125 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:36:56.644865 1193189 cri.go:89] found id: ""
	I1209 04:36:56.644879 1193189 logs.go:282] 0 containers: []
	W1209 04:36:56.644885 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:36:56.644890 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:36:56.644947 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:36:56.670230 1193189 cri.go:89] found id: ""
	I1209 04:36:56.670244 1193189 logs.go:282] 0 containers: []
	W1209 04:36:56.670252 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:36:56.670257 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:36:56.670314 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:36:56.697566 1193189 cri.go:89] found id: ""
	I1209 04:36:56.697580 1193189 logs.go:282] 0 containers: []
	W1209 04:36:56.697586 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:36:56.697592 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:36:56.697650 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:36:56.726250 1193189 cri.go:89] found id: ""
	I1209 04:36:56.726264 1193189 logs.go:282] 0 containers: []
	W1209 04:36:56.726270 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:36:56.726278 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:36:56.726287 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:36:56.789536 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:36:56.789556 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:36:56.818317 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:36:56.818332 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:36:56.874653 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:36:56.874671 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:36:56.892967 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:36:56.892987 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:36:56.969870 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:36:56.961196   16367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:56.962227   16367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:56.964000   16367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:56.964364   16367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:56.965851   16367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
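Every kubectl failure here reduces to the same symptom: nothing is listening on localhost:8441 inside the node. Two direct probes confirm that without going through kubectl; a sketch, assuming curl and ss are available in the node image:

    # Probe the apiserver endpoint directly (sketch; assumes curl/ss exist).
    curl -sk https://localhost:8441/readyz && echo || echo "no apiserver on 8441"
    sudo ss -ltn | grep -w 8441 || echo "nothing bound to port 8441"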
	I1209 04:36:59.470133 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:36:59.480193 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:36:59.480253 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:36:59.505288 1193189 cri.go:89] found id: ""
	I1209 04:36:59.505301 1193189 logs.go:282] 0 containers: []
	W1209 04:36:59.505308 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:36:59.505314 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:36:59.505375 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:36:59.530093 1193189 cri.go:89] found id: ""
	I1209 04:36:59.530108 1193189 logs.go:282] 0 containers: []
	W1209 04:36:59.530114 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:36:59.530120 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:36:59.530180 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:36:59.558857 1193189 cri.go:89] found id: ""
	I1209 04:36:59.558870 1193189 logs.go:282] 0 containers: []
	W1209 04:36:59.558877 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:36:59.558882 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:36:59.558939 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:36:59.587253 1193189 cri.go:89] found id: ""
	I1209 04:36:59.587267 1193189 logs.go:282] 0 containers: []
	W1209 04:36:59.587273 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:36:59.587278 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:36:59.587334 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:36:59.615574 1193189 cri.go:89] found id: ""
	I1209 04:36:59.615587 1193189 logs.go:282] 0 containers: []
	W1209 04:36:59.615594 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:36:59.615599 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:36:59.615661 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:36:59.640949 1193189 cri.go:89] found id: ""
	I1209 04:36:59.640963 1193189 logs.go:282] 0 containers: []
	W1209 04:36:59.640969 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:36:59.640975 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:36:59.641036 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:36:59.669059 1193189 cri.go:89] found id: ""
	I1209 04:36:59.669073 1193189 logs.go:282] 0 containers: []
	W1209 04:36:59.669079 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:36:59.669087 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:36:59.669099 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:36:59.728975 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:36:59.728993 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:36:59.746224 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:36:59.746240 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:36:59.811892 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:36:59.803565   16459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:59.804329   16459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:59.805884   16459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:59.806435   16459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:59.808154   16459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:36:59.811908 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:36:59.811919 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:36:59.874287 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:36:59.874310 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:37:02.402643 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:37:02.413719 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:37:02.413785 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:37:02.440871 1193189 cri.go:89] found id: ""
	I1209 04:37:02.440885 1193189 logs.go:282] 0 containers: []
	W1209 04:37:02.440892 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:37:02.440897 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:37:02.440962 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:37:02.466112 1193189 cri.go:89] found id: ""
	I1209 04:37:02.466125 1193189 logs.go:282] 0 containers: []
	W1209 04:37:02.466132 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:37:02.466137 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:37:02.466195 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:37:02.491412 1193189 cri.go:89] found id: ""
	I1209 04:37:02.491426 1193189 logs.go:282] 0 containers: []
	W1209 04:37:02.491433 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:37:02.491438 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:37:02.491495 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:37:02.519036 1193189 cri.go:89] found id: ""
	I1209 04:37:02.519051 1193189 logs.go:282] 0 containers: []
	W1209 04:37:02.519058 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:37:02.519063 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:37:02.519126 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:37:02.547912 1193189 cri.go:89] found id: ""
	I1209 04:37:02.547927 1193189 logs.go:282] 0 containers: []
	W1209 04:37:02.547934 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:37:02.547939 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:37:02.548000 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:37:02.574804 1193189 cri.go:89] found id: ""
	I1209 04:37:02.574818 1193189 logs.go:282] 0 containers: []
	W1209 04:37:02.574826 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:37:02.574832 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:37:02.574910 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:37:02.598953 1193189 cri.go:89] found id: ""
	I1209 04:37:02.598967 1193189 logs.go:282] 0 containers: []
	W1209 04:37:02.598973 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:37:02.598981 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:37:02.598994 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:37:02.661273 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:37:02.661293 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:37:02.692376 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:37:02.692392 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:37:02.750097 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:37:02.750116 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:37:02.768673 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:37:02.768691 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:37:02.831464 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:37:02.822705   16576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:37:02.823490   16576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:37:02.825015   16576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:37:02.825561   16576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:37:02.827104   16576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:37:05.331744 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:37:05.341534 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:37:05.341596 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:37:05.366255 1193189 cri.go:89] found id: ""
	I1209 04:37:05.366268 1193189 logs.go:282] 0 containers: []
	W1209 04:37:05.366275 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:37:05.366280 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:37:05.366339 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:37:05.391184 1193189 cri.go:89] found id: ""
	I1209 04:37:05.391198 1193189 logs.go:282] 0 containers: []
	W1209 04:37:05.391204 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:37:05.391211 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:37:05.391273 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:37:05.418240 1193189 cri.go:89] found id: ""
	I1209 04:37:05.418253 1193189 logs.go:282] 0 containers: []
	W1209 04:37:05.418259 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:37:05.418264 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:37:05.418327 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:37:05.442720 1193189 cri.go:89] found id: ""
	I1209 04:37:05.442734 1193189 logs.go:282] 0 containers: []
	W1209 04:37:05.442740 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:37:05.442746 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:37:05.442809 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:37:05.467915 1193189 cri.go:89] found id: ""
	I1209 04:37:05.467930 1193189 logs.go:282] 0 containers: []
	W1209 04:37:05.467937 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:37:05.467942 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:37:05.468009 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:37:05.491304 1193189 cri.go:89] found id: ""
	I1209 04:37:05.491318 1193189 logs.go:282] 0 containers: []
	W1209 04:37:05.491325 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:37:05.491330 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:37:05.491388 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:37:05.520597 1193189 cri.go:89] found id: ""
	I1209 04:37:05.520616 1193189 logs.go:282] 0 containers: []
	W1209 04:37:05.520623 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:37:05.520631 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:37:05.520642 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:37:05.577158 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:37:05.577177 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:37:05.593604 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:37:05.593620 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:37:05.661751 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:37:05.653767   16667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:37:05.654429   16667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:37:05.656081   16667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:37:05.656695   16667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:37:05.658088   16667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:37:05.661761 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:37:05.661771 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:37:05.729846 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:37:05.729866 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
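The retries above land roughly every three seconds. Waiting for the apiserver container to appear can be expressed as a small poll loop; this is only an illustrative sketch of the cadence seen in the log, not minikube's actual wait logic:

    # Poll until a running kube-apiserver container shows up (sketch).
    until sudo crictl ps --quiet --name=kube-apiserver | grep -q .; do
      echo "$(date +%T) kube-apiserver not up yet, retrying in 3s"
      sleep 3
    done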
	I1209 04:37:08.257598 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:37:08.267457 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:37:08.267520 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:37:08.295093 1193189 cri.go:89] found id: ""
	I1209 04:37:08.295107 1193189 logs.go:282] 0 containers: []
	W1209 04:37:08.295114 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:37:08.295119 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:37:08.295181 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:37:08.320140 1193189 cri.go:89] found id: ""
	I1209 04:37:08.320153 1193189 logs.go:282] 0 containers: []
	W1209 04:37:08.320160 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:37:08.320165 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:37:08.320233 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:37:08.344055 1193189 cri.go:89] found id: ""
	I1209 04:37:08.344069 1193189 logs.go:282] 0 containers: []
	W1209 04:37:08.344075 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:37:08.344081 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:37:08.344141 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:37:08.372791 1193189 cri.go:89] found id: ""
	I1209 04:37:08.372805 1193189 logs.go:282] 0 containers: []
	W1209 04:37:08.372811 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:37:08.372816 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:37:08.372874 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:37:08.396162 1193189 cri.go:89] found id: ""
	I1209 04:37:08.396175 1193189 logs.go:282] 0 containers: []
	W1209 04:37:08.396182 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:37:08.396187 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:37:08.396245 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:37:08.420733 1193189 cri.go:89] found id: ""
	I1209 04:37:08.420747 1193189 logs.go:282] 0 containers: []
	W1209 04:37:08.420755 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:37:08.420769 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:37:08.420830 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:37:08.444879 1193189 cri.go:89] found id: ""
	I1209 04:37:08.444894 1193189 logs.go:282] 0 containers: []
	W1209 04:37:08.444900 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:37:08.444918 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:37:08.444929 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:37:08.508132 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:37:08.499420   16764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:37:08.499882   16764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:37:08.501619   16764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:37:08.502150   16764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:37:08.503673   16764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:37:08.508143 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:37:08.508156 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:37:08.570875 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:37:08.570900 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:37:08.602018 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:37:08.602034 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:37:08.663156 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:37:08.663174 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
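Since the CRI never reports any control-plane containers, the failure points upstream of the runtime, most plausibly at the kubelet never starting the static pods. The kubelet and dmesg output minikube keeps collecting is where that would show up; a sketch for filtering it down, reusing the same journalctl and dmesg flags the log itself runs:

    # Filter the gathered kubelet/dmesg output for likely causes (sketch).
    sudo journalctl -u kubelet -n 400 --no-pager | grep -iE 'error|fail|backoff'
    sudo dmesg --level warn,err,crit,alert,emerg | tail -n 50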
	I1209 04:37:11.180415 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:37:11.191088 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:37:11.191148 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:37:11.218679 1193189 cri.go:89] found id: ""
	I1209 04:37:11.218696 1193189 logs.go:282] 0 containers: []
	W1209 04:37:11.218703 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:37:11.218708 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:37:11.218766 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:37:11.253810 1193189 cri.go:89] found id: ""
	I1209 04:37:11.253842 1193189 logs.go:282] 0 containers: []
	W1209 04:37:11.253849 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:37:11.253855 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:37:11.253925 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:37:11.279585 1193189 cri.go:89] found id: ""
	I1209 04:37:11.279599 1193189 logs.go:282] 0 containers: []
	W1209 04:37:11.279605 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:37:11.279610 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:37:11.279668 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:37:11.303733 1193189 cri.go:89] found id: ""
	I1209 04:37:11.303747 1193189 logs.go:282] 0 containers: []
	W1209 04:37:11.303754 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:37:11.303759 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:37:11.303818 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:37:11.328678 1193189 cri.go:89] found id: ""
	I1209 04:37:11.328692 1193189 logs.go:282] 0 containers: []
	W1209 04:37:11.328699 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:37:11.328710 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:37:11.328768 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:37:11.352807 1193189 cri.go:89] found id: ""
	I1209 04:37:11.352830 1193189 logs.go:282] 0 containers: []
	W1209 04:37:11.352838 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:37:11.352843 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:37:11.352904 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:37:11.380926 1193189 cri.go:89] found id: ""
	I1209 04:37:11.380940 1193189 logs.go:282] 0 containers: []
	W1209 04:37:11.380946 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:37:11.380954 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:37:11.380964 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:37:11.443730 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:37:11.443751 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:37:11.471147 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:37:11.471163 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:37:11.528045 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:37:11.528068 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:37:11.545822 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:37:11.545839 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:37:11.612652 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:37:11.604231   16889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:37:11.604891   16889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:37:11.606570   16889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:37:11.607169   16889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:37:11.608878   16889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:37:11.604231   16889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:37:11.604891   16889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:37:11.606570   16889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:37:11.607169   16889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:37:11.608878   16889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:37:14.112937 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:37:14.123734 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:37:14.123791 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:37:14.149868 1193189 cri.go:89] found id: ""
	I1209 04:37:14.149884 1193189 logs.go:282] 0 containers: []
	W1209 04:37:14.149891 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:37:14.149897 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:37:14.149957 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:37:14.175575 1193189 cri.go:89] found id: ""
	I1209 04:37:14.175589 1193189 logs.go:282] 0 containers: []
	W1209 04:37:14.175595 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:37:14.175601 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:37:14.175665 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:37:14.202589 1193189 cri.go:89] found id: ""
	I1209 04:37:14.202615 1193189 logs.go:282] 0 containers: []
	W1209 04:37:14.202621 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:37:14.202627 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:37:14.202707 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:37:14.229085 1193189 cri.go:89] found id: ""
	I1209 04:37:14.229099 1193189 logs.go:282] 0 containers: []
	W1209 04:37:14.229109 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:37:14.229117 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:37:14.229183 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:37:14.254508 1193189 cri.go:89] found id: ""
	I1209 04:37:14.254522 1193189 logs.go:282] 0 containers: []
	W1209 04:37:14.254529 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:37:14.254534 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:37:14.254626 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:37:14.282967 1193189 cri.go:89] found id: ""
	I1209 04:37:14.282990 1193189 logs.go:282] 0 containers: []
	W1209 04:37:14.282997 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:37:14.283003 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:37:14.283072 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:37:14.307959 1193189 cri.go:89] found id: ""
	I1209 04:37:14.307973 1193189 logs.go:282] 0 containers: []
	W1209 04:37:14.307980 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:37:14.307988 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:37:14.307998 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:37:14.337297 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:37:14.337312 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:37:14.393504 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:37:14.393523 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:37:14.411720 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:37:14.411736 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:37:14.476754 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:37:14.469112   16989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:37:14.469506   16989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:37:14.470955   16989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:37:14.471259   16989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:37:14.472758   16989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:37:14.469112   16989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:37:14.469506   16989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:37:14.470955   16989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:37:14.471259   16989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:37:14.472758   16989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:37:14.476764 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:37:14.476775 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:37:17.039773 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:37:17.050019 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:37:17.050078 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:37:17.074811 1193189 cri.go:89] found id: ""
	I1209 04:37:17.074825 1193189 logs.go:282] 0 containers: []
	W1209 04:37:17.074841 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:37:17.074847 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:37:17.074928 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:37:17.098749 1193189 cri.go:89] found id: ""
	I1209 04:37:17.098763 1193189 logs.go:282] 0 containers: []
	W1209 04:37:17.098779 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:37:17.098784 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:37:17.098851 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:37:17.123314 1193189 cri.go:89] found id: ""
	I1209 04:37:17.123328 1193189 logs.go:282] 0 containers: []
	W1209 04:37:17.123334 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:37:17.123348 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:37:17.123404 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:37:17.148281 1193189 cri.go:89] found id: ""
	I1209 04:37:17.148304 1193189 logs.go:282] 0 containers: []
	W1209 04:37:17.148314 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:37:17.148319 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:37:17.148386 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:37:17.178459 1193189 cri.go:89] found id: ""
	I1209 04:37:17.178473 1193189 logs.go:282] 0 containers: []
	W1209 04:37:17.178480 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:37:17.178487 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:37:17.178545 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:37:17.214370 1193189 cri.go:89] found id: ""
	I1209 04:37:17.214383 1193189 logs.go:282] 0 containers: []
	W1209 04:37:17.214390 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:37:17.214395 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:37:17.214455 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:37:17.241547 1193189 cri.go:89] found id: ""
	I1209 04:37:17.241560 1193189 logs.go:282] 0 containers: []
	W1209 04:37:17.241567 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:37:17.241574 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:37:17.241584 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:37:17.300902 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:37:17.300920 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:37:17.318244 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:37:17.318260 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:37:17.379838 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:37:17.371574   17083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:37:17.372258   17083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:37:17.373943   17083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:37:17.374513   17083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:37:17.376103   17083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:37:17.371574   17083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:37:17.372258   17083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:37:17.373943   17083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:37:17.374513   17083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:37:17.376103   17083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:37:17.379865 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:37:17.379875 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:37:17.442204 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:37:17.442227 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
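Each iteration of the loop above (roughly every 3 seconds) starts with the same liveness check: a pgrep for a kube-apiserver process, then a crictl listing per control-plane component. A sketch of the check that keeps coming back empty, with the commands copied from the log:

    # Sketch of the apiserver liveness check minikube loops on above:
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'     # -x exact, -n newest, -f full cmdline
    sudo crictl ps -a --quiet --name=kube-apiserver  # empty output => no container
    # Both returning nothing is what drives the retry loop and, once the
    # budget below is exhausted, the fallback to a full cluster reset.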
	I1209 04:37:19.972933 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:37:19.982835 1193189 kubeadm.go:602] duration metric: took 4m3.833613801s to restartPrimaryControlPlane
	W1209 04:37:19.982896 1193189 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1209 04:37:19.982967 1193189 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1209 04:37:20.394224 1193189 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 04:37:20.407222 1193189 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 04:37:20.415043 1193189 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1209 04:37:20.415096 1193189 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 04:37:20.422447 1193189 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 04:37:20.422458 1193189 kubeadm.go:158] found existing configuration files:
	
	I1209 04:37:20.422511 1193189 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1209 04:37:20.429958 1193189 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 04:37:20.430020 1193189 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 04:37:20.437087 1193189 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1209 04:37:20.444177 1193189 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 04:37:20.444229 1193189 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 04:37:20.451583 1193189 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1209 04:37:20.459107 1193189 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 04:37:20.459158 1193189 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 04:37:20.466013 1193189 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1209 04:37:20.473265 1193189 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 04:37:20.473320 1193189 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
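The ls/grep/rm sequence above is minikube's stale-kubeconfig cleanup: each file under /etc/kubernetes is kept only if it already references the expected control-plane endpoint. Condensed into the loop it effectively performs (a sketch, not minikube's actual code):

    # Sketch of the cleanup pattern applied above to each kubeconfig file:
    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q 'https://control-plane.minikube.internal:8441' \
        "/etc/kubernetes/$f.conf" || sudo rm -f "/etc/kubernetes/$f.conf"
    done
    # Here every grep exits with status 2 (the files are gone after kubeadm
    # reset), so the rm calls are no-ops and kubeadm init starts clean.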
	I1209 04:37:20.480362 1193189 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1209 04:37:20.591599 1193189 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1209 04:37:20.592032 1193189 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1209 04:37:20.651935 1193189 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
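The first SystemVerification warning is benign on this host: when /proc/config.gz is absent, the verifier tries to load the 'configs' kernel module to expose it, and this AWS kernel does not ship that module. A sketch of the underlying check (an assumption about the verifier's mechanism, consistent with the modprobe error quoted above):

    # Sketch: the kernel-config lookup behind the first warning.
    sudo modprobe configs && zcat /proc/config.gz | head
    # "Module configs not found" means the check is skipped with a warning;
    # it is likely unrelated to the kubelet failure that follows.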
	I1209 04:41:22.764150 1193189 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1209 04:41:22.764175 1193189 kubeadm.go:319] 
	I1209 04:41:22.764241 1193189 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
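The wait-control-plane failure is kubeadm timing out on the kubelet's local health endpoint; the probe it quotes can be run directly inside the node to confirm (the curl line is taken verbatim from the error message):

    # The health probe kubeadm polls for up to 4m0s, runnable by hand in-node:
    curl -sSL http://127.0.0.1:10248/healthz
    # "context deadline exceeded" above means this never returned "ok": the
    # kubelet either never bound port 10248 or never became healthy.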
	I1209 04:41:22.768309 1193189 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1209 04:41:22.768359 1193189 kubeadm.go:319] [preflight] Running pre-flight checks
	I1209 04:41:22.768442 1193189 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1209 04:41:22.768497 1193189 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1209 04:41:22.768531 1193189 kubeadm.go:319] OS: Linux
	I1209 04:41:22.768594 1193189 kubeadm.go:319] CGROUPS_CPU: enabled
	I1209 04:41:22.768653 1193189 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1209 04:41:22.768699 1193189 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1209 04:41:22.768746 1193189 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1209 04:41:22.768792 1193189 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1209 04:41:22.768840 1193189 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1209 04:41:22.768883 1193189 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1209 04:41:22.768930 1193189 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1209 04:41:22.768975 1193189 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1209 04:41:22.769046 1193189 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1209 04:41:22.769140 1193189 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1209 04:41:22.769229 1193189 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1209 04:41:22.769290 1193189 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1209 04:41:22.772269 1193189 out.go:252]   - Generating certificates and keys ...
	I1209 04:41:22.772365 1193189 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1209 04:41:22.772442 1193189 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1209 04:41:22.772517 1193189 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1209 04:41:22.772582 1193189 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1209 04:41:22.772651 1193189 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1209 04:41:22.772740 1193189 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1209 04:41:22.772808 1193189 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1209 04:41:22.772883 1193189 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1209 04:41:22.772975 1193189 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1209 04:41:22.773069 1193189 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1209 04:41:22.773105 1193189 kubeadm.go:319] [certs] Using the existing "sa" key
	I1209 04:41:22.773160 1193189 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1209 04:41:22.773215 1193189 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1209 04:41:22.773279 1193189 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1209 04:41:22.773333 1193189 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1209 04:41:22.773401 1193189 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1209 04:41:22.773459 1193189 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1209 04:41:22.773544 1193189 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1209 04:41:22.773604 1193189 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1209 04:41:22.778452 1193189 out.go:252]   - Booting up control plane ...
	I1209 04:41:22.778558 1193189 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1209 04:41:22.778636 1193189 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1209 04:41:22.778708 1193189 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1209 04:41:22.778830 1193189 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1209 04:41:22.778931 1193189 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1209 04:41:22.779034 1193189 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1209 04:41:22.779165 1193189 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1209 04:41:22.779213 1193189 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1209 04:41:22.779347 1193189 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1209 04:41:22.779447 1193189 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1209 04:41:22.779507 1193189 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001187798s
	I1209 04:41:22.779509 1193189 kubeadm.go:319] 
	I1209 04:41:22.779562 1193189 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1209 04:41:22.779605 1193189 kubeadm.go:319] 	- The kubelet is not running
	I1209 04:41:22.779728 1193189 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1209 04:41:22.779731 1193189 kubeadm.go:319] 
	I1209 04:41:22.779842 1193189 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1209 04:41:22.779891 1193189 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1209 04:41:22.779919 1193189 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1209 04:41:22.779932 1193189 kubeadm.go:319] 
	W1209 04:41:22.780053 1193189 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001187798s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
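Before the retry below, the two commands kubeadm suggests are the quickest way to see why the kubelet never came up; wrapped in minikube ssh so they run inside the node (a sketch; the profile name is taken from this run):

    # Sketch: the troubleshooting commands kubeadm suggests, run in-node:
    minikube -p functional-667319 ssh -- sudo systemctl status kubelet
    minikube -p functional-667319 ssh -- sudo journalctl -xeu kubelet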
	
	I1209 04:41:22.780164 1193189 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1209 04:41:23.192047 1193189 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 04:41:23.205020 1193189 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1209 04:41:23.205076 1193189 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 04:41:23.212555 1193189 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 04:41:23.212563 1193189 kubeadm.go:158] found existing configuration files:
	
	I1209 04:41:23.212616 1193189 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1209 04:41:23.220135 1193189 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 04:41:23.220190 1193189 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 04:41:23.227342 1193189 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1209 04:41:23.234934 1193189 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 04:41:23.234988 1193189 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 04:41:23.242413 1193189 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1209 04:41:23.249859 1193189 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 04:41:23.249916 1193189 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 04:41:23.257497 1193189 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1209 04:41:23.264938 1193189 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 04:41:23.264993 1193189 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1209 04:41:23.272287 1193189 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1209 04:41:23.315971 1193189 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1209 04:41:23.316329 1193189 kubeadm.go:319] [preflight] Running pre-flight checks
	I1209 04:41:23.386479 1193189 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1209 04:41:23.386543 1193189 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1209 04:41:23.386577 1193189 kubeadm.go:319] OS: Linux
	I1209 04:41:23.386622 1193189 kubeadm.go:319] CGROUPS_CPU: enabled
	I1209 04:41:23.386669 1193189 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1209 04:41:23.386716 1193189 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1209 04:41:23.386763 1193189 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1209 04:41:23.386810 1193189 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1209 04:41:23.386857 1193189 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1209 04:41:23.386901 1193189 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1209 04:41:23.386948 1193189 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1209 04:41:23.386993 1193189 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1209 04:41:23.459528 1193189 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1209 04:41:23.459630 1193189 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1209 04:41:23.459719 1193189 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1209 04:41:23.465017 1193189 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1209 04:41:23.470401 1193189 out.go:252]   - Generating certificates and keys ...
	I1209 04:41:23.470490 1193189 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1209 04:41:23.470556 1193189 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1209 04:41:23.470655 1193189 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1209 04:41:23.470730 1193189 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1209 04:41:23.470799 1193189 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1209 04:41:23.470852 1193189 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1209 04:41:23.470919 1193189 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1209 04:41:23.470980 1193189 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1209 04:41:23.471052 1193189 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1209 04:41:23.471123 1193189 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1209 04:41:23.471160 1193189 kubeadm.go:319] [certs] Using the existing "sa" key
	I1209 04:41:23.471222 1193189 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1209 04:41:23.897547 1193189 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1209 04:41:24.071180 1193189 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1209 04:41:24.419266 1193189 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1209 04:41:24.580042 1193189 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1209 04:41:25.012112 1193189 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1209 04:41:25.012658 1193189 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1209 04:41:25.015310 1193189 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1209 04:41:25.018776 1193189 out.go:252]   - Booting up control plane ...
	I1209 04:41:25.018875 1193189 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1209 04:41:25.018952 1193189 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1209 04:41:25.019019 1193189 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1209 04:41:25.039820 1193189 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1209 04:41:25.039928 1193189 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1209 04:41:25.047252 1193189 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1209 04:41:25.047955 1193189 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1209 04:41:25.048349 1193189 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1209 04:41:25.184171 1193189 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1209 04:41:25.184286 1193189 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1209 04:45:25.184394 1193189 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000314916s
	I1209 04:45:25.184418 1193189 kubeadm.go:319] 
	I1209 04:45:25.184509 1193189 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1209 04:45:25.184553 1193189 kubeadm.go:319] 	- The kubelet is not running
	I1209 04:45:25.184657 1193189 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1209 04:45:25.184661 1193189 kubeadm.go:319] 
	I1209 04:45:25.184765 1193189 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1209 04:45:25.184796 1193189 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1209 04:45:25.184826 1193189 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1209 04:45:25.184829 1193189 kubeadm.go:319] 
	I1209 04:45:25.188658 1193189 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1209 04:45:25.189080 1193189 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1209 04:45:25.189188 1193189 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1209 04:45:25.189440 1193189 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1209 04:45:25.189444 1193189 kubeadm.go:319] 
	I1209 04:45:25.189512 1193189 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
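Both init attempts fail identically, and the cgroups v1 warning repeated above is the most plausible lead on this 5.15 AWS kernel: per that warning, kubelet v1.35+ will not run on a cgroup v1 host unless FailCgroupV1 is explicitly set to false. A sketch of that opt-out, assuming the usual lowerCamelCase YAML mapping of the option name (the field spelling is not confirmed by this log; verify against the kubelet configuration docs):

    # Sketch, per the warning's own wording: explicitly re-enable cgroup v1
    # support for kubelet v1.35+. 'failCgroupV1' is an assumed YAML mapping
    # of the 'FailCgroupV1' option named in the warning.
    cat <<'EOF' | sudo tee -a /var/lib/kubelet/config.yaml
    failCgroupV1: false
    EOF
    sudo systemctl restart kubelet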
	I1209 04:45:25.189563 1193189 kubeadm.go:403] duration metric: took 12m9.073031305s to StartCluster
	I1209 04:45:25.189594 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:45:25.189654 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:45:25.214653 1193189 cri.go:89] found id: ""
	I1209 04:45:25.214667 1193189 logs.go:282] 0 containers: []
	W1209 04:45:25.214674 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:45:25.214680 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:45:25.214745 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:45:25.239781 1193189 cri.go:89] found id: ""
	I1209 04:45:25.239795 1193189 logs.go:282] 0 containers: []
	W1209 04:45:25.239802 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:45:25.239806 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:45:25.239865 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:45:25.263923 1193189 cri.go:89] found id: ""
	I1209 04:45:25.263937 1193189 logs.go:282] 0 containers: []
	W1209 04:45:25.263943 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:45:25.263949 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:45:25.264009 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:45:25.289497 1193189 cri.go:89] found id: ""
	I1209 04:45:25.289510 1193189 logs.go:282] 0 containers: []
	W1209 04:45:25.289521 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:45:25.289527 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:45:25.289587 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:45:25.314477 1193189 cri.go:89] found id: ""
	I1209 04:45:25.314491 1193189 logs.go:282] 0 containers: []
	W1209 04:45:25.314497 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:45:25.314502 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:45:25.314564 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:45:25.343027 1193189 cri.go:89] found id: ""
	I1209 04:45:25.343041 1193189 logs.go:282] 0 containers: []
	W1209 04:45:25.343048 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:45:25.343054 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:45:25.343116 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:45:25.372137 1193189 cri.go:89] found id: ""
	I1209 04:45:25.372151 1193189 logs.go:282] 0 containers: []
	W1209 04:45:25.372158 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:45:25.372166 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:45:25.372175 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:45:25.430985 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:45:25.431004 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:45:25.448709 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:45:25.448726 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:45:25.515693 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:45:25.506884   20840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:25.507687   20840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:25.509338   20840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:25.509652   20840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:25.511142   20840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:45:25.506884   20840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:25.507687   20840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:25.509338   20840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:25.509652   20840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:25.511142   20840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:45:25.515704 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:45:25.515716 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:45:25.578666 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:45:25.578686 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1209 04:45:25.609638 1193189 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000314916s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1209 04:45:25.609683 1193189 out.go:285] * 
	W1209 04:45:25.609743 1193189 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000314916s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1209 04:45:25.609756 1193189 out.go:285] * 
	W1209 04:45:25.611848 1193189 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 04:45:25.617063 1193189 out.go:203] 
	W1209 04:45:25.620790 1193189 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000314916s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1209 04:45:25.620840 1193189 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1209 04:45:25.620858 1193189 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1209 04:45:25.624102 1193189 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 09 04:33:14 functional-667319 containerd[9667]: time="2025-12-09T04:33:14.466187116Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 09 04:33:14 functional-667319 containerd[9667]: time="2025-12-09T04:33:14.466260623Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 09 04:33:14 functional-667319 containerd[9667]: time="2025-12-09T04:33:14.466354134Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 09 04:33:14 functional-667319 containerd[9667]: time="2025-12-09T04:33:14.466435125Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 09 04:33:14 functional-667319 containerd[9667]: time="2025-12-09T04:33:14.466499213Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 09 04:33:14 functional-667319 containerd[9667]: time="2025-12-09T04:33:14.466598591Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 09 04:33:14 functional-667319 containerd[9667]: time="2025-12-09T04:33:14.466657461Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 09 04:33:14 functional-667319 containerd[9667]: time="2025-12-09T04:33:14.466714427Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 09 04:33:14 functional-667319 containerd[9667]: time="2025-12-09T04:33:14.466779869Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 09 04:33:14 functional-667319 containerd[9667]: time="2025-12-09T04:33:14.466859104Z" level=info msg="Connect containerd service"
	Dec 09 04:33:14 functional-667319 containerd[9667]: time="2025-12-09T04:33:14.467196932Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 09 04:33:14 functional-667319 containerd[9667]: time="2025-12-09T04:33:14.467824983Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 09 04:33:14 functional-667319 containerd[9667]: time="2025-12-09T04:33:14.483453604Z" level=info msg="Start subscribing containerd event"
	Dec 09 04:33:14 functional-667319 containerd[9667]: time="2025-12-09T04:33:14.484090811Z" level=info msg="Start recovering state"
	Dec 09 04:33:14 functional-667319 containerd[9667]: time="2025-12-09T04:33:14.483855889Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 09 04:33:14 functional-667319 containerd[9667]: time="2025-12-09T04:33:14.486351191Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 09 04:33:14 functional-667319 containerd[9667]: time="2025-12-09T04:33:14.525478311Z" level=info msg="Start event monitor"
	Dec 09 04:33:14 functional-667319 containerd[9667]: time="2025-12-09T04:33:14.525531922Z" level=info msg="Start cni network conf syncer for default"
	Dec 09 04:33:14 functional-667319 containerd[9667]: time="2025-12-09T04:33:14.525542924Z" level=info msg="Start streaming server"
	Dec 09 04:33:14 functional-667319 containerd[9667]: time="2025-12-09T04:33:14.525552738Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 09 04:33:14 functional-667319 containerd[9667]: time="2025-12-09T04:33:14.525560959Z" level=info msg="runtime interface starting up..."
	Dec 09 04:33:14 functional-667319 containerd[9667]: time="2025-12-09T04:33:14.525567629Z" level=info msg="starting plugins..."
	Dec 09 04:33:14 functional-667319 containerd[9667]: time="2025-12-09T04:33:14.525580897Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 09 04:33:14 functional-667319 containerd[9667]: time="2025-12-09T04:33:14.525716006Z" level=info msg="containerd successfully booted in 0.083289s"
	Dec 09 04:33:14 functional-667319 systemd[1]: Started containerd.service - containerd container runtime.
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:45:26.830650   20957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:26.831073   20957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:26.836492   20957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:26.837151   20957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:26.838803   20957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 9 03:13] overlayfs: idmapped layers are currently not supported
	[ +25.904254] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:14] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:16] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:18] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:19] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:30] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:32] overlayfs: idmapped layers are currently not supported
	[ +28.114653] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:33] overlayfs: idmapped layers are currently not supported
	[ +23.720849] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:34] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:35] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:36] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:37] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:38] overlayfs: idmapped layers are currently not supported
	[ +23.656275] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:39] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:57] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:58] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:00] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:02] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:03] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:05] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:06] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 04:45:26 up  7:27,  0 user,  load average: 0.15, 0.19, 0.47
	Linux functional-667319 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 09 04:45:23 functional-667319 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 09 04:45:23 functional-667319 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 318.
	Dec 09 04:45:23 functional-667319 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:45:23 functional-667319 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:45:23 functional-667319 kubelet[20762]: E1209 04:45:23.978809   20762 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 09 04:45:23 functional-667319 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 09 04:45:23 functional-667319 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 09 04:45:24 functional-667319 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
	Dec 09 04:45:24 functional-667319 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:45:24 functional-667319 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:45:24 functional-667319 kubelet[20767]: E1209 04:45:24.727461   20767 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 09 04:45:24 functional-667319 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 09 04:45:24 functional-667319 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 09 04:45:25 functional-667319 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 09 04:45:25 functional-667319 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:45:25 functional-667319 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:45:25 functional-667319 kubelet[20829]: E1209 04:45:25.505174   20829 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 09 04:45:25 functional-667319 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 09 04:45:25 functional-667319 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 09 04:45:26 functional-667319 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 09 04:45:26 functional-667319 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:45:26 functional-667319 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:45:26 functional-667319 kubelet[20873]: E1209 04:45:26.247162   20873 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 09 04:45:26 functional-667319 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 09 04:45:26 functional-667319 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	
-- /stdout --
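Triage note: the "==> kubelet <==" journal above shows the actual blocker. kubelet v1.35.0-beta.0 exits on every systemd restart (restart counter 318-321) with "kubelet is configured to not run on a host using cgroup v1", so no static pods are ever created and every request to localhost:8441 is refused. A minimal check of the host cgroup mode, assuming the functional-667319 node is still up (standard minikube/coreutils commands, not captured from this run):

	# "tmpfs" means the node sees cgroup v1; "cgroup2fs" means cgroup v2
	out/minikube-linux-arm64 -p functional-667319 ssh "stat -fc %T /sys/fs/cgroup/"
	# re-check the crash loop recorded in the kubelet journal above
	out/minikube-linux-arm64 -p functional-667319 ssh "sudo journalctl -u kubelet -n 20 --no-pager"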
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-667319 -n functional-667319
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-667319 -n functional-667319: exit status 2 (368.921224ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-667319" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (736.43s)
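Remediation note: the suggestion printed above (--extra-config=kubelet.cgroup-driver=systemd, issue #4172) is minikube's generic kubelet-not-running advice; the preflight warning in the same logs points at the real constraint, namely that kubelet v1.35+ refuses cgroup v1 hosts unless the KubeletConfiguration option FailCgroupV1 is explicitly set to false (see the linked KEP 5573). A sketch of the restart exactly as the log proposes it, untested against this job; how FailCgroupV1 would be threaded through this minikube build is not shown in the logs:

	out/minikube-linux-arm64 start -p functional-667319 --extra-config=kubelet.cgroup-driver=systemd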
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (2.08s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-667319 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: (dbg) Non-zero exit: kubectl --context functional-667319 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (55.383313ms)
-- stdout --
	{
	    "apiVersion": "v1",
	    "items": [],
	    "kind": "List",
	    "metadata": {
	        "resourceVersion": ""
	    }
	}
-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
** /stderr **
functional_test.go:827: failed to get components. args "kubectl --context functional-667319 get po -l tier=control-plane -n kube-system -o=json": exit status 1
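Context note: the empty List plus "connection refused" is consistent with the ExtraConfig failure above; with kubelet crash-looping, the apiserver at 192.168.49.2:8441 never came up, so there are no control-plane pods for the tier=control-plane selector to match. Two quick cross-checks (standard commands, assuming the profile still exists):

	out/minikube-linux-arm64 status -p functional-667319
	kubectl --context functional-667319 cluster-info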
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-667319
helpers_test.go:243: (dbg) docker inspect functional-667319:
-- stdout --
	[
	    {
	        "Id": "e5b6511799c8d5c445a335a3bd5cc9a61b518fc27ac93dad8800da366ef32129",
	        "Created": "2025-12-09T04:18:34.060957311Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1182075,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-09T04:18:34.126944158Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e4eb91ed18a24161fce60c7cdd660144ecd5b8c5029dc2dea2c5e423c2f48ce4",
	        "ResolvConfPath": "/var/lib/docker/containers/e5b6511799c8d5c445a335a3bd5cc9a61b518fc27ac93dad8800da366ef32129/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e5b6511799c8d5c445a335a3bd5cc9a61b518fc27ac93dad8800da366ef32129/hostname",
	        "HostsPath": "/var/lib/docker/containers/e5b6511799c8d5c445a335a3bd5cc9a61b518fc27ac93dad8800da366ef32129/hosts",
	        "LogPath": "/var/lib/docker/containers/e5b6511799c8d5c445a335a3bd5cc9a61b518fc27ac93dad8800da366ef32129/e5b6511799c8d5c445a335a3bd5cc9a61b518fc27ac93dad8800da366ef32129-json.log",
	        "Name": "/functional-667319",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-667319:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-667319",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e5b6511799c8d5c445a335a3bd5cc9a61b518fc27ac93dad8800da366ef32129",
	                "LowerDir": "/var/lib/docker/overlay2/b0239006282b6e4609a1f554d0a3fb94c749a13505795c8e4078cb2db194e8e0-init/diff:/var/lib/docker/overlay2/c44bb57aa59cc265266f37f2bb6e7ec0e7d641c3b4aeaa57e6d23deec6f0d1d4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b0239006282b6e4609a1f554d0a3fb94c749a13505795c8e4078cb2db194e8e0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b0239006282b6e4609a1f554d0a3fb94c749a13505795c8e4078cb2db194e8e0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b0239006282b6e4609a1f554d0a3fb94c749a13505795c8e4078cb2db194e8e0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-667319",
	                "Source": "/var/lib/docker/volumes/functional-667319/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-667319",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-667319",
	                "name.minikube.sigs.k8s.io": "functional-667319",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7c81dabcd9e57af9bce0bc0f5619f6ef3a27af43f4b649283a5bd778ab256415",
	            "SandboxKey": "/var/run/docker/netns/7c81dabcd9e5",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33900"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33901"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33904"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33902"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33903"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-667319": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "fe:40:bd:46:56:d8",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "88b3a65de70c15005c532a44219284d4df94e474ca5b78b04514c2f932b03beb",
	                    "EndpointID": "bdef7b156f4a28c1f641ae70b42db2750bb810ae6fe93fd65325e62eb232fe91",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-667319",
	                        "e5b6511799c8"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
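Reading the inspect output: the kic container itself is healthy ("Status": "running", "RestartCount": 0) and apiserver port 8441/tcp is published to 127.0.0.1:33903, so the refused connections are not a Docker port-mapping problem; the listener inside the node simply never started. The binding can be confirmed with the stock Docker CLI (not captured from this run):

	docker port functional-667319 8441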
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-667319 -n functional-667319
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-667319 -n functional-667319: exit status 2 (305.660356ms)
-- stdout --
	Running
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-667319 logs -n 25
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                          ARGS                                                                           │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-717497 image ls --format yaml --alsologtostderr                                                                                              │ functional-717497 │ jenkins │ v1.37.0 │ 09 Dec 25 04:18 UTC │ 09 Dec 25 04:18 UTC │
	│ ssh     │ functional-717497 ssh pgrep buildkitd                                                                                                                   │ functional-717497 │ jenkins │ v1.37.0 │ 09 Dec 25 04:18 UTC │                     │
	│ image   │ functional-717497 image ls --format json --alsologtostderr                                                                                              │ functional-717497 │ jenkins │ v1.37.0 │ 09 Dec 25 04:18 UTC │ 09 Dec 25 04:18 UTC │
	│ image   │ functional-717497 image build -t localhost/my-image:functional-717497 testdata/build --alsologtostderr                                                  │ functional-717497 │ jenkins │ v1.37.0 │ 09 Dec 25 04:18 UTC │ 09 Dec 25 04:18 UTC │
	│ image   │ functional-717497 image ls --format table --alsologtostderr                                                                                             │ functional-717497 │ jenkins │ v1.37.0 │ 09 Dec 25 04:18 UTC │ 09 Dec 25 04:18 UTC │
	│ image   │ functional-717497 image ls                                                                                                                              │ functional-717497 │ jenkins │ v1.37.0 │ 09 Dec 25 04:18 UTC │ 09 Dec 25 04:18 UTC │
	│ delete  │ -p functional-717497                                                                                                                                    │ functional-717497 │ jenkins │ v1.37.0 │ 09 Dec 25 04:18 UTC │ 09 Dec 25 04:18 UTC │
	│ start   │ -p functional-667319 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:18 UTC │                     │
	│ start   │ -p functional-667319 --alsologtostderr -v=8                                                                                                             │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:26 UTC │                     │
	│ cache   │ functional-667319 cache add registry.k8s.io/pause:3.1                                                                                                   │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:32 UTC │ 09 Dec 25 04:33 UTC │
	│ cache   │ functional-667319 cache add registry.k8s.io/pause:3.3                                                                                                   │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:33 UTC │ 09 Dec 25 04:33 UTC │
	│ cache   │ functional-667319 cache add registry.k8s.io/pause:latest                                                                                                │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:33 UTC │ 09 Dec 25 04:33 UTC │
	│ cache   │ functional-667319 cache add minikube-local-cache-test:functional-667319                                                                                 │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:33 UTC │ 09 Dec 25 04:33 UTC │
	│ cache   │ functional-667319 cache delete minikube-local-cache-test:functional-667319                                                                              │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:33 UTC │ 09 Dec 25 04:33 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 09 Dec 25 04:33 UTC │ 09 Dec 25 04:33 UTC │
	│ cache   │ list                                                                                                                                                    │ minikube          │ jenkins │ v1.37.0 │ 09 Dec 25 04:33 UTC │ 09 Dec 25 04:33 UTC │
	│ ssh     │ functional-667319 ssh sudo crictl images                                                                                                                │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:33 UTC │ 09 Dec 25 04:33 UTC │
	│ ssh     │ functional-667319 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                      │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:33 UTC │ 09 Dec 25 04:33 UTC │
	│ ssh     │ functional-667319 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                 │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:33 UTC │                     │
	│ cache   │ functional-667319 cache reload                                                                                                                          │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:33 UTC │ 09 Dec 25 04:33 UTC │
	│ ssh     │ functional-667319 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                 │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:33 UTC │ 09 Dec 25 04:33 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 09 Dec 25 04:33 UTC │ 09 Dec 25 04:33 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                                     │ minikube          │ jenkins │ v1.37.0 │ 09 Dec 25 04:33 UTC │ 09 Dec 25 04:33 UTC │
	│ kubectl │ functional-667319 kubectl -- --context functional-667319 get pods                                                                                       │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:33 UTC │                     │
	│ start   │ -p functional-667319 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                                                │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:33 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/09 04:33:11
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 04:33:11.365325 1193189 out.go:360] Setting OutFile to fd 1 ...
	I1209 04:33:11.365424 1193189 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 04:33:11.365428 1193189 out.go:374] Setting ErrFile to fd 2...
	I1209 04:33:11.365431 1193189 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 04:33:11.365670 1193189 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-1142328/.minikube/bin
	I1209 04:33:11.366033 1193189 out.go:368] Setting JSON to false
	I1209 04:33:11.366848 1193189 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":26115,"bootTime":1765228677,"procs":156,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1209 04:33:11.366902 1193189 start.go:143] virtualization:  
	I1209 04:33:11.370321 1193189 out.go:179] * [functional-667319] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1209 04:33:11.373998 1193189 out.go:179]   - MINIKUBE_LOCATION=22081
	I1209 04:33:11.374082 1193189 notify.go:221] Checking for updates...
	I1209 04:33:11.379822 1193189 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 04:33:11.382611 1193189 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22081-1142328/kubeconfig
	I1209 04:33:11.385432 1193189 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-1142328/.minikube
	I1209 04:33:11.388728 1193189 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1209 04:33:11.391441 1193189 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 04:33:11.394813 1193189 config.go:182] Loaded profile config "functional-667319": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1209 04:33:11.394910 1193189 driver.go:422] Setting default libvirt URI to qemu:///system
	I1209 04:33:11.422551 1193189 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1209 04:33:11.422654 1193189 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 04:33:11.481358 1193189 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-09 04:33:11.472506561 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1209 04:33:11.481459 1193189 docker.go:319] overlay module found
	I1209 04:33:11.484471 1193189 out.go:179] * Using the docker driver based on existing profile
	I1209 04:33:11.487406 1193189 start.go:309] selected driver: docker
	I1209 04:33:11.487427 1193189 start.go:927] validating driver "docker" against &{Name:functional-667319 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-667319 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 04:33:11.487512 1193189 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 04:33:11.487612 1193189 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 04:33:11.542290 1193189 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-09 04:33:11.533632532 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1209 04:33:11.542703 1193189 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 04:33:11.542726 1193189 cni.go:84] Creating CNI manager for ""
	I1209 04:33:11.542784 1193189 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
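For context on the line above: with the docker driver and a non-docker container runtime, minikube needs a CNI and recommends kindnet. A minimal Go sketch of that decision, assuming the helper name and the "bridge" fallback are illustrative rather than minikube's actual cni.go API:

	package main

	import "fmt"

	// chooseCNI condenses the decision logged above; names are assumptions.
	func chooseCNI(driver, runtime, requested string) string {
		if requested != "" {
			return requested // an explicit --cni flag would win
		}
		if driver == "docker" && runtime != "docker" {
			return "kindnet" // kic driver + containerd/cri-o: kindnet is recommended
		}
		return "bridge"
	}

	func main() {
		fmt.Println(chooseCNI("docker", "containerd", "")) // prints: kindnet
	}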
	I1209 04:33:11.542826 1193189 start.go:353] cluster config:
	{Name:functional-667319 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-667319 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 04:33:11.546045 1193189 out.go:179] * Starting "functional-667319" primary control-plane node in "functional-667319" cluster
	I1209 04:33:11.548925 1193189 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1209 04:33:11.551638 1193189 out.go:179] * Pulling base image v0.0.48-1765184860-22066 ...
	I1209 04:33:11.554609 1193189 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1209 04:33:11.554645 1193189 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1209 04:33:11.554670 1193189 cache.go:65] Caching tarball of preloaded images
	I1209 04:33:11.554693 1193189 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local docker daemon
	I1209 04:33:11.554756 1193189 preload.go:238] Found /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1209 04:33:11.554765 1193189 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
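The preload check logged above amounts to a stat of the per-version tarball in the local cache. A sketch of the equivalent logic; the path layout is copied from the log, the helper name is an assumption:

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
	)

	// preloadPath mirrors the cache layout in the log lines above; the v18
	// schema tag and arm64 suffix are taken verbatim from that path.
	func preloadPath(home, k8sVersion, runtime string) string {
		name := fmt.Sprintf("preloaded-images-k8s-v18-%s-%s-overlay2-arm64.tar.lz4", k8sVersion, runtime)
		return filepath.Join(home, "cache", "preloaded-tarball", name)
	}

	func main() {
		p := preloadPath("/home/jenkins/minikube-integration/22081-1142328/.minikube", "v1.35.0-beta.0", "containerd")
		if _, err := os.Stat(p); err == nil {
			fmt.Println("found local preload, skipping download:", p)
		}
	}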
	I1209 04:33:11.554868 1193189 profile.go:143] Saving config to /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/config.json ...
	I1209 04:33:11.573683 1193189 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local docker daemon, skipping pull
	I1209 04:33:11.573695 1193189 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c exists in daemon, skipping load
	I1209 04:33:11.573713 1193189 cache.go:243] Successfully downloaded all kic artifacts
	I1209 04:33:11.573740 1193189 start.go:360] acquireMachinesLock for functional-667319: {Name:mk6c31f0747796f5f8ac8ea1653d6ee60fe2a47d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 04:33:11.573797 1193189 start.go:364] duration metric: took 42.739µs to acquireMachinesLock for "functional-667319"
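The lock parameters in the acquireMachinesLock line (Delay:500ms Timeout:10m0s) describe a retry loop around an exclusive lock; acquisition took microseconds here because nothing else held it. A minimal sketch of that retry pattern using an O_EXCL lock file; the function name and lock path are hypothetical, not minikube's implementation:

	package main

	import (
		"errors"
		"os"
		"time"
	)

	// acquireLock retries an exclusive create every delay until timeout,
	// mirroring the Delay/Timeout fields in the log line above.
	func acquireLock(path string, delay, timeout time.Duration) (*os.File, error) {
		deadline := time.Now().Add(timeout)
		for {
			f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
			if err == nil {
				return f, nil // caller removes the file to release the lock
			}
			if time.Now().After(deadline) {
				return nil, errors.New("timed out waiting for machines lock")
			}
			time.Sleep(delay)
		}
	}

	func main() {
		f, err := acquireLock("/tmp/minikube-machines.lock", 500*time.Millisecond, 10*time.Minute)
		if err != nil {
			panic(err)
		}
		f.Close()
		os.Remove(f.Name()) // release
	}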
	I1209 04:33:11.573815 1193189 start.go:96] Skipping create...Using existing machine configuration
	I1209 04:33:11.573819 1193189 fix.go:54] fixHost starting: 
	I1209 04:33:11.574074 1193189 cli_runner.go:164] Run: docker container inspect functional-667319 --format={{.State.Status}}
	I1209 04:33:11.589947 1193189 fix.go:112] recreateIfNeeded on functional-667319: state=Running err=<nil>
	W1209 04:33:11.589973 1193189 fix.go:138] unexpected machine state, will restart: <nil>
	I1209 04:33:11.593148 1193189 out.go:252] * Updating the running docker "functional-667319" container ...
	I1209 04:33:11.593168 1193189 machine.go:94] provisionDockerMachine start ...
	I1209 04:33:11.593256 1193189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-667319
	I1209 04:33:11.609392 1193189 main.go:143] libmachine: Using SSH client type: native
	I1209 04:33:11.609722 1193189 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 33900 <nil> <nil>}
	I1209 04:33:11.609729 1193189 main.go:143] libmachine: About to run SSH command:
	hostname
	I1209 04:33:11.759408 1193189 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-667319
	
	I1209 04:33:11.759422 1193189 ubuntu.go:182] provisioning hostname "functional-667319"
	I1209 04:33:11.759483 1193189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-667319
	I1209 04:33:11.776859 1193189 main.go:143] libmachine: Using SSH client type: native
	I1209 04:33:11.777189 1193189 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 33900 <nil> <nil>}
	I1209 04:33:11.777198 1193189 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-667319 && echo "functional-667319" | sudo tee /etc/hostname
	I1209 04:33:11.939211 1193189 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-667319
	
	I1209 04:33:11.939295 1193189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-667319
	I1209 04:33:11.957143 1193189 main.go:143] libmachine: Using SSH client type: native
	I1209 04:33:11.957494 1193189 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 33900 <nil> <nil>}
	I1209 04:33:11.957508 1193189 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-667319' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-667319/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-667319' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 04:33:12.113237 1193189 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1209 04:33:12.113254 1193189 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22081-1142328/.minikube CaCertPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22081-1142328/.minikube}
	I1209 04:33:12.113278 1193189 ubuntu.go:190] setting up certificates
	I1209 04:33:12.113294 1193189 provision.go:84] configureAuth start
	I1209 04:33:12.113362 1193189 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-667319
	I1209 04:33:12.130912 1193189 provision.go:143] copyHostCerts
	I1209 04:33:12.131003 1193189 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.pem, removing ...
	I1209 04:33:12.131010 1193189 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.pem
	I1209 04:33:12.131086 1193189 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.pem (1078 bytes)
	I1209 04:33:12.131177 1193189 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-1142328/.minikube/cert.pem, removing ...
	I1209 04:33:12.131181 1193189 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-1142328/.minikube/cert.pem
	I1209 04:33:12.131205 1193189 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22081-1142328/.minikube/cert.pem (1123 bytes)
	I1209 04:33:12.131250 1193189 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-1142328/.minikube/key.pem, removing ...
	I1209 04:33:12.131254 1193189 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-1142328/.minikube/key.pem
	I1209 04:33:12.131276 1193189 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22081-1142328/.minikube/key.pem (1675 bytes)
	I1209 04:33:12.131318 1193189 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca-key.pem org=jenkins.functional-667319 san=[127.0.0.1 192.168.49.2 functional-667319 localhost minikube]
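The provision.go line above generates a server certificate whose SAN list is exactly the san=[...] set logged there. The sketch below shows how that SAN list lands in a certificate via Go's crypto/x509; it self-signs for brevity, whereas minikube signs with its ca.pem/ca-key.pem pair:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.functional-667319"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the profile
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// the san=[...] list from the provision.go line above:
			DNSNames:    []string{"functional-667319", "localhost", "minikube"},
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
		}
		// self-signed here for brevity; minikube uses its CA as the parent
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}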
	I1209 04:33:12.827484 1193189 provision.go:177] copyRemoteCerts
	I1209 04:33:12.827535 1193189 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 04:33:12.827573 1193189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-667319
	I1209 04:33:12.846654 1193189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/functional-667319/id_rsa Username:docker}
	I1209 04:33:12.951639 1193189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1209 04:33:12.968320 1193189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1209 04:33:12.985745 1193189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1209 04:33:13.004711 1193189 provision.go:87] duration metric: took 891.395644ms to configureAuth
	I1209 04:33:13.004730 1193189 ubuntu.go:206] setting minikube options for container-runtime
	I1209 04:33:13.005000 1193189 config.go:182] Loaded profile config "functional-667319": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1209 04:33:13.005006 1193189 machine.go:97] duration metric: took 1.411833664s to provisionDockerMachine
	I1209 04:33:13.005012 1193189 start.go:293] postStartSetup for "functional-667319" (driver="docker")
	I1209 04:33:13.005022 1193189 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 04:33:13.005072 1193189 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 04:33:13.005108 1193189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-667319
	I1209 04:33:13.023376 1193189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/functional-667319/id_rsa Username:docker}
	I1209 04:33:13.128032 1193189 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 04:33:13.131471 1193189 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1209 04:33:13.131490 1193189 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1209 04:33:13.131500 1193189 filesync.go:126] Scanning /home/jenkins/minikube-integration/22081-1142328/.minikube/addons for local assets ...
	I1209 04:33:13.131552 1193189 filesync.go:126] Scanning /home/jenkins/minikube-integration/22081-1142328/.minikube/files for local assets ...
	I1209 04:33:13.131625 1193189 filesync.go:149] local asset: /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/ssl/certs/11442312.pem -> 11442312.pem in /etc/ssl/certs
	I1209 04:33:13.131701 1193189 filesync.go:149] local asset: /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/test/nested/copy/1144231/hosts -> hosts in /etc/test/nested/copy/1144231
	I1209 04:33:13.131749 1193189 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1144231
	I1209 04:33:13.139091 1193189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/ssl/certs/11442312.pem --> /etc/ssl/certs/11442312.pem (1708 bytes)
	I1209 04:33:13.156114 1193189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/test/nested/copy/1144231/hosts --> /etc/test/nested/copy/1144231/hosts (40 bytes)
	I1209 04:33:13.173744 1193189 start.go:296] duration metric: took 168.716821ms for postStartSetup
	I1209 04:33:13.173816 1193189 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1209 04:33:13.173854 1193189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-667319
	I1209 04:33:13.198555 1193189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/functional-667319/id_rsa Username:docker}
	I1209 04:33:13.300903 1193189 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1209 04:33:13.305102 1193189 fix.go:56] duration metric: took 1.731276319s for fixHost
	I1209 04:33:13.305116 1193189 start.go:83] releasing machines lock for "functional-667319", held for 1.731312428s
	I1209 04:33:13.305216 1193189 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-667319
	I1209 04:33:13.322301 1193189 ssh_runner.go:195] Run: cat /version.json
	I1209 04:33:13.322356 1193189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-667319
	I1209 04:33:13.322602 1193189 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 04:33:13.322654 1193189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-667319
	I1209 04:33:13.345854 1193189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/functional-667319/id_rsa Username:docker}
	I1209 04:33:13.346808 1193189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/functional-667319/id_rsa Username:docker}
	I1209 04:33:13.447601 1193189 ssh_runner.go:195] Run: systemctl --version
	I1209 04:33:13.537710 1193189 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 04:33:13.542181 1193189 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 04:33:13.542253 1193189 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 04:33:13.550371 1193189 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1209 04:33:13.550385 1193189 start.go:496] detecting cgroup driver to use...
	I1209 04:33:13.550417 1193189 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1209 04:33:13.550479 1193189 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1209 04:33:13.565987 1193189 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1209 04:33:13.579220 1193189 docker.go:218] disabling cri-docker service (if available) ...
	I1209 04:33:13.579279 1193189 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 04:33:13.594632 1193189 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 04:33:13.607810 1193189 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 04:33:13.745867 1193189 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 04:33:13.855372 1193189 docker.go:234] disabling docker service ...
	I1209 04:33:13.855434 1193189 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 04:33:13.878271 1193189 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 04:33:13.891442 1193189 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 04:33:14.014618 1193189 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 04:33:14.144235 1193189 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 04:33:14.157713 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 04:33:14.171634 1193189 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1209 04:33:14.180595 1193189 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1209 04:33:14.189855 1193189 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1209 04:33:14.189928 1193189 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1209 04:33:14.198663 1193189 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1209 04:33:14.207241 1193189 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1209 04:33:14.215864 1193189 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1209 04:33:14.224572 1193189 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 04:33:14.232585 1193189 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1209 04:33:14.241204 1193189 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1209 04:33:14.249919 1193189 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1209 04:33:14.258812 1193189 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 04:33:14.266241 1193189 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 04:33:14.273587 1193189 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 04:33:14.393428 1193189 ssh_runner.go:195] Run: sudo systemctl restart containerd
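Taken together, the sed edits above steer containerd's config.toml toward roughly the following shape before the restart. This is an approximate reconstruction from the sed patterns alone; exact section paths depend on the containerd config schema version in use:

	[plugins."io.containerd.grpc.v1.cri"]
	  enable_unprivileged_ports = true
	  sandbox_image = "registry.k8s.io/pause:3.10.1"
	  restrict_oom_score_adj = false
	  [plugins."io.containerd.grpc.v1.cri".cni]
	    conf_dir = "/etc/cni/net.d"
	  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
	    runtime_type = "io.containerd.runc.v2"
	    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	      SystemdCgroup = false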
	I1209 04:33:14.528665 1193189 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1209 04:33:14.528726 1193189 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1209 04:33:14.532955 1193189 start.go:564] Will wait 60s for crictl version
	I1209 04:33:14.533056 1193189 ssh_runner.go:195] Run: which crictl
	I1209 04:33:14.541891 1193189 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1209 04:33:14.570282 1193189 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1209 04:33:14.570350 1193189 ssh_runner.go:195] Run: containerd --version
	I1209 04:33:14.592081 1193189 ssh_runner.go:195] Run: containerd --version
	I1209 04:33:14.617312 1193189 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1209 04:33:14.620294 1193189 cli_runner.go:164] Run: docker network inspect functional-667319 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1209 04:33:14.636105 1193189 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1209 04:33:14.643286 1193189 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1209 04:33:14.646097 1193189 kubeadm.go:884] updating cluster {Name:functional-667319 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-667319 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 04:33:14.646234 1193189 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1209 04:33:14.646312 1193189 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 04:33:14.671604 1193189 containerd.go:627] all images are preloaded for containerd runtime.
	I1209 04:33:14.671615 1193189 containerd.go:534] Images already preloaded, skipping extraction
	I1209 04:33:14.671676 1193189 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 04:33:14.702360 1193189 containerd.go:627] all images are preloaded for containerd runtime.
	I1209 04:33:14.702371 1193189 cache_images.go:86] Images are preloaded, skipping loading
	I1209 04:33:14.702376 1193189 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
	I1209 04:33:14.702482 1193189 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-667319 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-667319 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1209 04:33:14.702549 1193189 ssh_runner.go:195] Run: sudo crictl info
	I1209 04:33:14.731154 1193189 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1209 04:33:14.731172 1193189 cni.go:84] Creating CNI manager for ""
	I1209 04:33:14.731179 1193189 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1209 04:33:14.731190 1193189 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1209 04:33:14.731212 1193189 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-667319 NodeName:functional-667319 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1209 04:33:14.731316 1193189 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-667319"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1209 04:33:14.731385 1193189 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1209 04:33:14.742794 1193189 binaries.go:51] Found k8s binaries, skipping transfer
	I1209 04:33:14.742854 1193189 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1209 04:33:14.750345 1193189 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1209 04:33:14.763345 1193189 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1209 04:33:14.775780 1193189 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2087 bytes)
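"scp memory" in the lines above means the payload never exists as a local file: minikube renders the bytes in memory and streams them over the SSH session. A sketch of the same pattern with golang.org/x/crypto/ssh; the host, port, user, and key path are taken from the log, the tee-based transfer is an illustrative assumption:

	package main

	import (
		"bytes"
		"log"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/functional-667319/id_rsa")
		if err != nil {
			log.Fatal(err)
		}
		signer, err := ssh.ParsePrivateKey(keyBytes)
		if err != nil {
			log.Fatal(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // tolerable for a throwaway test node
		}
		client, err := ssh.Dial("tcp", "127.0.0.1:33900", cfg)
		if err != nil {
			log.Fatal(err)
		}
		defer client.Close()

		sess, err := client.NewSession()
		if err != nil {
			log.Fatal(err)
		}
		defer sess.Close()

		payload := []byte("# kubeadm.yaml rendered in memory\n")
		sess.Stdin = bytes.NewReader(payload) // the "memory" side of "scp memory"
		if err := sess.Run("sudo tee /var/tmp/minikube/kubeadm.yaml.new >/dev/null"); err != nil {
			log.Fatal(err)
		}
	}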
	I1209 04:33:14.788798 1193189 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1209 04:33:14.792560 1193189 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 04:33:14.907792 1193189 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 04:33:15.431459 1193189 certs.go:69] Setting up /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319 for IP: 192.168.49.2
	I1209 04:33:15.431470 1193189 certs.go:195] generating shared ca certs ...
	I1209 04:33:15.431485 1193189 certs.go:227] acquiring lock for ca certs: {Name:mk15788702f8c4e23b5aeab3f44961d296fab259 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 04:33:15.431654 1193189 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.key
	I1209 04:33:15.431695 1193189 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/proxy-client-ca.key
	I1209 04:33:15.431701 1193189 certs.go:257] generating profile certs ...
	I1209 04:33:15.431782 1193189 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/client.key
	I1209 04:33:15.431840 1193189 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/apiserver.key.c80eb595
	I1209 04:33:15.431875 1193189 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/proxy-client.key
	I1209 04:33:15.431982 1193189 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/1144231.pem (1338 bytes)
	W1209 04:33:15.432037 1193189 certs.go:480] ignoring /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/1144231_empty.pem, impossibly tiny 0 bytes
	I1209 04:33:15.432046 1193189 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca-key.pem (1679 bytes)
	I1209 04:33:15.432075 1193189 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem (1078 bytes)
	I1209 04:33:15.432099 1193189 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/cert.pem (1123 bytes)
	I1209 04:33:15.432147 1193189 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/key.pem (1675 bytes)
	I1209 04:33:15.432195 1193189 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/ssl/certs/11442312.pem (1708 bytes)
	I1209 04:33:15.432796 1193189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 04:33:15.450868 1193189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1209 04:33:15.469951 1193189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 04:33:15.488029 1193189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1209 04:33:15.507676 1193189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1209 04:33:15.528269 1193189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1209 04:33:15.547354 1193189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 04:33:15.565510 1193189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1209 04:33:15.583378 1193189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/ssl/certs/11442312.pem --> /usr/share/ca-certificates/11442312.pem (1708 bytes)
	I1209 04:33:15.601546 1193189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 04:33:15.619028 1193189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/1144231.pem --> /usr/share/ca-certificates/1144231.pem (1338 bytes)
	I1209 04:33:15.636618 1193189 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 04:33:15.649310 1193189 ssh_runner.go:195] Run: openssl version
	I1209 04:33:15.655222 1193189 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/11442312.pem
	I1209 04:33:15.662530 1193189 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/11442312.pem /etc/ssl/certs/11442312.pem
	I1209 04:33:15.670168 1193189 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11442312.pem
	I1209 04:33:15.673829 1193189 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 04:18 /usr/share/ca-certificates/11442312.pem
	I1209 04:33:15.673881 1193189 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11442312.pem
	I1209 04:33:15.715756 1193189 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1209 04:33:15.723175 1193189 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1209 04:33:15.730584 1193189 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1209 04:33:15.738232 1193189 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 04:33:15.742081 1193189 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 04:09 /usr/share/ca-certificates/minikubeCA.pem
	I1209 04:33:15.742141 1193189 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 04:33:15.786133 1193189 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1209 04:33:15.793720 1193189 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1144231.pem
	I1209 04:33:15.801263 1193189 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1144231.pem /etc/ssl/certs/1144231.pem
	I1209 04:33:15.808357 1193189 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1144231.pem
	I1209 04:33:15.812098 1193189 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 04:18 /usr/share/ca-certificates/1144231.pem
	I1209 04:33:15.812149 1193189 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1144231.pem
	I1209 04:33:15.854297 1193189 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
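Each cert block above follows the same recipe: copy the PEM into /usr/share/ca-certificates, run openssl x509 -hash -noout to get the subject hash, then verify the /etc/ssl/certs/<hash>.0 symlink (b5213941.0 for minikubeCA, 3ec20f2e.0 and 51391683.0 for the others). A sketch of installing one CA that way; it assumes an openssl binary on PATH, and the fixed .0 suffix ignores hash collisions:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// installCA links pemPath into /etc/ssl/certs under its subject hash,
	// mirroring the openssl + ln -fs sequence in the log above.
	func installCA(pemPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		os.Remove(link) // equivalent of ln -fs
		return os.Symlink(pemPath, link)
	}

	func main() {
		if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}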
	I1209 04:33:15.861740 1193189 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 04:33:15.865303 1193189 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1209 04:33:15.905838 1193189 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1209 04:33:15.946617 1193189 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1209 04:33:15.987357 1193189 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1209 04:33:16.032170 1193189 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1209 04:33:16.075134 1193189 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
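The run of openssl x509 -checkend 86400 calls above asks one question per control-plane cert: does it expire within the next 24 hours (86400 seconds)? openssl exits non-zero if so, which is what tells minikube to regenerate. The same check in Go's crypto/x509:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin mirrors `openssl x509 -checkend <seconds>`.
	func expiresWithin(pemPath string, window time.Duration) (bool, error) {
		data, err := os.ReadFile(pemPath)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", pemPath)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(window).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(2)
		}
		if soon {
			os.Exit(1) // same convention as openssl -checkend
		}
	}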
	I1209 04:33:16.116540 1193189 kubeadm.go:401] StartCluster: {Name:functional-667319 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-667319 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 04:33:16.116615 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1209 04:33:16.116676 1193189 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 04:33:16.141721 1193189 cri.go:89] found id: ""
	I1209 04:33:16.141780 1193189 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1209 04:33:16.149204 1193189 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1209 04:33:16.149214 1193189 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1209 04:33:16.149263 1193189 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1209 04:33:16.156279 1193189 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1209 04:33:16.156783 1193189 kubeconfig.go:125] found "functional-667319" server: "https://192.168.49.2:8441"
	I1209 04:33:16.159840 1193189 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1209 04:33:16.167426 1193189 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-09 04:18:41.945308258 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-09 04:33:14.782796805 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
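The drift detection above leans on diff's exit status: 0 means the rendered kubeadm.yaml matches what is on disk, 1 means the files differ (drift, so reconfigure from the new file), anything higher means diff itself failed. A sketch of that check; the function names are hypothetical:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	// configDrift runs `sudo diff -u old new` and maps the exit status.
	func configDrift(oldPath, newPath string) (bool, string, error) {
		cmd := exec.Command("sudo", "diff", "-u", oldPath, newPath)
		out, err := cmd.CombinedOutput()
		if err == nil {
			return false, "", nil // exit 0: no drift
		}
		var ee *exec.ExitError
		if errors.As(err, &ee) && ee.ExitCode() == 1 {
			return true, string(out), nil // exit 1: files differ
		}
		return false, "", err // exit >1: diff itself failed
	}

	func main() {
		drift, patch, err := configDrift("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
		if err != nil {
			panic(err)
		}
		if drift {
			fmt.Print(patch)
		}
	}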
	I1209 04:33:16.167445 1193189 kubeadm.go:1161] stopping kube-system containers ...
	I1209 04:33:16.167459 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I1209 04:33:16.167517 1193189 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 04:33:16.201963 1193189 cri.go:89] found id: ""
	I1209 04:33:16.202024 1193189 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1209 04:33:16.219973 1193189 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 04:33:16.227472 1193189 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5635 Dec  9 04:22 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Dec  9 04:22 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Dec  9 04:22 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5588 Dec  9 04:22 /etc/kubernetes/scheduler.conf
	
	I1209 04:33:16.227532 1193189 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1209 04:33:16.234796 1193189 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1209 04:33:16.241862 1193189 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1209 04:33:16.241916 1193189 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 04:33:16.249083 1193189 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1209 04:33:16.256206 1193189 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1209 04:33:16.256261 1193189 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 04:33:16.263352 1193189 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1209 04:33:16.270362 1193189 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1209 04:33:16.270416 1193189 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1209 04:33:16.277706 1193189 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 04:33:16.285107 1193189 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 04:33:16.327899 1193189 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 04:33:17.810490 1193189 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.482563431s)
	I1209 04:33:17.810548 1193189 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1209 04:33:18.017563 1193189 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 04:33:18.086202 1193189 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1209 04:33:18.134715 1193189 api_server.go:52] waiting for apiserver process to appear ...
	I1209 04:33:18.134785 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	[... 111 identical "Run: sudo pgrep -xnf kube-apiserver.*minikube.*" probe lines, repeated every 500ms from 04:33:18.635261 through 04:34:13.634991, elided; minikube kept polling because the kube-apiserver process had not yet appeared ...]
	I1209 04:34:14.135750 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:34:14.635398 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:34:15.135547 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:34:15.635003 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:34:16.135840 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:34:16.635833 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:34:17.135311 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:34:17.635902 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
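The run above is minikube's apiserver health wait: the same pgrep probe fires over SSH roughly every 500ms until a matching process shows up or the wait times out (here it never does). A minimal Go sketch of that poll pattern, assuming a hypothetical Runner abstraction in place of minikube's actual SSH runner:

package main

import (
	"fmt"
	"time"
)

// Runner stands in for "run this command on the node" (minikube uses SSH).
type Runner func(cmd string) error

func waitForAPIServerProcess(run Runner, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// pgrep exits 0 only when a matching process exists, so the exit
		// status alone answers "is kube-apiserver running yet?".
		if err := run(`sudo pgrep -xnf kube-apiserver.*minikube.*`); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver process never appeared within %v", timeout)
}

func main() {
	// Stub runner that always fails, mimicking the node in this run.
	alwaysDown := Runner(func(string) error { return fmt.Errorf("exit status 1") })
	if err := waitForAPIServerProcess(alwaysDown, 2*time.Second); err != nil {
		fmt.Println(err)
	}
}

Because pgrep's exit status doubles as the health signal, no output parsing is needed; each probe line in the log corresponds to one failed iteration of such a loop.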
	I1209 04:34:18.135877 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:34:18.135980 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:34:18.160417 1193189 cri.go:89] found id: ""
	I1209 04:34:18.160431 1193189 logs.go:282] 0 containers: []
	W1209 04:34:18.160438 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:34:18.160442 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:34:18.160499 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:34:18.186014 1193189 cri.go:89] found id: ""
	I1209 04:34:18.186028 1193189 logs.go:282] 0 containers: []
	W1209 04:34:18.186035 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:34:18.186040 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:34:18.186102 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:34:18.209963 1193189 cri.go:89] found id: ""
	I1209 04:34:18.209977 1193189 logs.go:282] 0 containers: []
	W1209 04:34:18.209983 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:34:18.209989 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:34:18.210048 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:34:18.234704 1193189 cri.go:89] found id: ""
	I1209 04:34:18.234723 1193189 logs.go:282] 0 containers: []
	W1209 04:34:18.234730 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:34:18.234737 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:34:18.234794 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:34:18.260085 1193189 cri.go:89] found id: ""
	I1209 04:34:18.260100 1193189 logs.go:282] 0 containers: []
	W1209 04:34:18.260107 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:34:18.260112 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:34:18.260170 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:34:18.284959 1193189 cri.go:89] found id: ""
	I1209 04:34:18.284972 1193189 logs.go:282] 0 containers: []
	W1209 04:34:18.284978 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:34:18.284983 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:34:18.285040 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:34:18.313883 1193189 cri.go:89] found id: ""
	I1209 04:34:18.313898 1193189 logs.go:282] 0 containers: []
	W1209 04:34:18.313905 1193189 logs.go:284] No container was found matching "kindnet"
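Once the process probe gives up, each diagnostic cycle enumerates the expected control-plane containers through the CRI, as above: crictl ps -a --quiet --name=<component> prints only matching container IDs, so empty output maps directly to the "No container was found" warnings. A standalone Go sketch of that enumeration (assumes crictl is on the PATH; in the test it runs on the node over SSH):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
	}
	for _, name := range components {
		// --quiet prints bare container IDs, one per line; -a includes exited ones.
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("crictl failed for %s: %v\n", name, err)
			continue
		}
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			fmt.Printf("no container found matching %q\n", name) // mirrors the warnings above
			continue
		}
		fmt.Printf("%s: %v\n", name, ids)
	}
}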
	I1209 04:34:18.313912 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:34:18.313923 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:34:18.330120 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:34:18.330138 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:34:18.391936 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:34:18.383205   10728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:18.383825   10728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:18.385661   10728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:18.386372   10728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:18.388209   10728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
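All five discovery retries fail the same way: nothing is listening on localhost:8441 (the --apiserver-port chosen for this profile), so kubectl gets connection refused at the TCP layer before any TLS or API handshake happens. A plain dial reproduces the symptom without kubectl; the port is taken from the log, everything else is a sketch:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Same endpoint kubectl's discovery client is hitting in the log.
	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
	if err != nil {
		// On this node: "dial tcp [::1]:8441: connect: connection refused".
		fmt.Println("apiserver port unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on :8441")
}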
	I1209 04:34:18.391947 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:34:18.391957 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:34:18.457339 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:34:18.457361 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:34:18.484687 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:34:18.484702 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
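Each "Gathering logs" pass shells out to one command per source, as listed above. Note the container-status fallback: `which crictl || echo crictl` resolves crictl's full path when installed (sudo's PATH may not include it), and if the crictl invocation still fails the pipeline falls back to docker ps -a. A sketch of the same gather table (hypothetical names; minikube drives these through its SSH runner rather than local exec):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	gathers := []struct{ name, cmd string }{
		{"kubelet", "sudo journalctl -u kubelet -n 400"},
		{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
		{"containerd", "sudo journalctl -u containerd -n 400"},
		// Resolve crictl's path if present; fall back to docker if it fails.
		{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
	}
	for _, g := range gathers {
		// bash -c so pipes, backticks, and || behave as in the log lines.
		out, err := exec.Command("/bin/bash", "-c", g.cmd).CombinedOutput()
		if err != nil {
			fmt.Printf("gathering %s failed: %v\n", g.name, err)
			continue
		}
		fmt.Printf("=== %s ===\n%s", g.name, out)
	}
}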
	I1209 04:34:21.045358 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:34:21.056486 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:34:21.056551 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:34:21.085673 1193189 cri.go:89] found id: ""
	I1209 04:34:21.085687 1193189 logs.go:282] 0 containers: []
	W1209 04:34:21.085693 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:34:21.085699 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:34:21.085758 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:34:21.111043 1193189 cri.go:89] found id: ""
	I1209 04:34:21.111056 1193189 logs.go:282] 0 containers: []
	W1209 04:34:21.111063 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:34:21.111068 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:34:21.111128 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:34:21.137031 1193189 cri.go:89] found id: ""
	I1209 04:34:21.137044 1193189 logs.go:282] 0 containers: []
	W1209 04:34:21.137051 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:34:21.137057 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:34:21.137118 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:34:21.161998 1193189 cri.go:89] found id: ""
	I1209 04:34:21.162012 1193189 logs.go:282] 0 containers: []
	W1209 04:34:21.162019 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:34:21.162024 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:34:21.162088 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:34:21.185710 1193189 cri.go:89] found id: ""
	I1209 04:34:21.185733 1193189 logs.go:282] 0 containers: []
	W1209 04:34:21.185740 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:34:21.185745 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:34:21.185805 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:34:21.209921 1193189 cri.go:89] found id: ""
	I1209 04:34:21.209934 1193189 logs.go:282] 0 containers: []
	W1209 04:34:21.209941 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:34:21.209946 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:34:21.210007 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:34:21.237263 1193189 cri.go:89] found id: ""
	I1209 04:34:21.237277 1193189 logs.go:282] 0 containers: []
	W1209 04:34:21.237284 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:34:21.237291 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:34:21.237302 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:34:21.253947 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:34:21.253964 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:34:21.323683 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:34:21.314716   10833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:21.315370   10833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:21.316976   10833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:21.317539   10833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:21.318471   10833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:34:21.323693 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:34:21.323704 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:34:21.385947 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:34:21.385968 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:34:21.414692 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:34:21.414709 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:34:23.972329 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:34:23.982273 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:34:23.982333 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:34:24.008968 1193189 cri.go:89] found id: ""
	I1209 04:34:24.008983 1193189 logs.go:282] 0 containers: []
	W1209 04:34:24.008997 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:34:24.009002 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:34:24.009067 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:34:24.035053 1193189 cri.go:89] found id: ""
	I1209 04:34:24.035067 1193189 logs.go:282] 0 containers: []
	W1209 04:34:24.035074 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:34:24.035082 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:34:24.035155 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:34:24.060177 1193189 cri.go:89] found id: ""
	I1209 04:34:24.060202 1193189 logs.go:282] 0 containers: []
	W1209 04:34:24.060210 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:34:24.060215 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:34:24.060278 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:34:24.087352 1193189 cri.go:89] found id: ""
	I1209 04:34:24.087365 1193189 logs.go:282] 0 containers: []
	W1209 04:34:24.087372 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:34:24.087377 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:34:24.087436 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:34:24.112436 1193189 cri.go:89] found id: ""
	I1209 04:34:24.112450 1193189 logs.go:282] 0 containers: []
	W1209 04:34:24.112457 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:34:24.112463 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:34:24.112523 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:34:24.138043 1193189 cri.go:89] found id: ""
	I1209 04:34:24.138057 1193189 logs.go:282] 0 containers: []
	W1209 04:34:24.138063 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:34:24.138068 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:34:24.138127 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:34:24.162473 1193189 cri.go:89] found id: ""
	I1209 04:34:24.162486 1193189 logs.go:282] 0 containers: []
	W1209 04:34:24.162493 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:34:24.162501 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:34:24.162512 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:34:24.218725 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:34:24.218750 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:34:24.237014 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:34:24.237032 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:34:24.301761 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:34:24.293159   10942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:24.293842   10942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:24.295579   10942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:24.296219   10942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:24.297932   10942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:34:24.301771 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:34:24.301782 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:34:24.364794 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:34:24.364819 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:34:26.896098 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:34:26.905998 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:34:26.906059 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:34:26.937370 1193189 cri.go:89] found id: ""
	I1209 04:34:26.937384 1193189 logs.go:282] 0 containers: []
	W1209 04:34:26.937390 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:34:26.937395 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:34:26.937455 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:34:26.961993 1193189 cri.go:89] found id: ""
	I1209 04:34:26.962006 1193189 logs.go:282] 0 containers: []
	W1209 04:34:26.962013 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:34:26.962018 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:34:26.962075 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:34:26.991456 1193189 cri.go:89] found id: ""
	I1209 04:34:26.991470 1193189 logs.go:282] 0 containers: []
	W1209 04:34:26.991476 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:34:26.991495 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:34:26.991554 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:34:27.018891 1193189 cri.go:89] found id: ""
	I1209 04:34:27.018904 1193189 logs.go:282] 0 containers: []
	W1209 04:34:27.018911 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:34:27.018916 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:34:27.018974 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:34:27.043050 1193189 cri.go:89] found id: ""
	I1209 04:34:27.043064 1193189 logs.go:282] 0 containers: []
	W1209 04:34:27.043070 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:34:27.043083 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:34:27.043141 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:34:27.069538 1193189 cri.go:89] found id: ""
	I1209 04:34:27.069553 1193189 logs.go:282] 0 containers: []
	W1209 04:34:27.069559 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:34:27.069564 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:34:27.069624 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:34:27.092560 1193189 cri.go:89] found id: ""
	I1209 04:34:27.092573 1193189 logs.go:282] 0 containers: []
	W1209 04:34:27.092580 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:34:27.092588 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:34:27.092597 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:34:27.149471 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:34:27.149509 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:34:27.166396 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:34:27.166413 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:34:27.233147 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:34:27.224772   11045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:27.225484   11045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:27.227169   11045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:27.227648   11045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:27.229172   11045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:34:27.233160 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:34:27.233171 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:34:27.300582 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:34:27.300607 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:34:29.831076 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:34:29.841031 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:34:29.841110 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:34:29.870054 1193189 cri.go:89] found id: ""
	I1209 04:34:29.870068 1193189 logs.go:282] 0 containers: []
	W1209 04:34:29.870074 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:34:29.870080 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:34:29.870148 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:34:29.893884 1193189 cri.go:89] found id: ""
	I1209 04:34:29.893897 1193189 logs.go:282] 0 containers: []
	W1209 04:34:29.893904 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:34:29.893909 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:34:29.893984 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:34:29.917545 1193189 cri.go:89] found id: ""
	I1209 04:34:29.917559 1193189 logs.go:282] 0 containers: []
	W1209 04:34:29.917565 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:34:29.917570 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:34:29.917636 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:34:29.948707 1193189 cri.go:89] found id: ""
	I1209 04:34:29.948721 1193189 logs.go:282] 0 containers: []
	W1209 04:34:29.948727 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:34:29.948733 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:34:29.948792 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:34:29.988977 1193189 cri.go:89] found id: ""
	I1209 04:34:29.988990 1193189 logs.go:282] 0 containers: []
	W1209 04:34:29.988997 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:34:29.989003 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:34:29.989058 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:34:30.029614 1193189 cri.go:89] found id: ""
	I1209 04:34:30.029653 1193189 logs.go:282] 0 containers: []
	W1209 04:34:30.029660 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:34:30.029666 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:34:30.029747 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:34:30.057862 1193189 cri.go:89] found id: ""
	I1209 04:34:30.057877 1193189 logs.go:282] 0 containers: []
	W1209 04:34:30.057884 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:34:30.057892 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:34:30.057903 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:34:30.125643 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:34:30.125665 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:34:30.154365 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:34:30.154393 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:34:30.218342 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:34:30.218370 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:34:30.235415 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:34:30.235438 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:34:30.300328 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:34:30.292511   11158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:30.293159   11158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:30.294635   11158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:30.295041   11158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:30.296473   11158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:34:32.800607 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:34:32.810690 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:34:32.810752 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:34:32.837030 1193189 cri.go:89] found id: ""
	I1209 04:34:32.837045 1193189 logs.go:282] 0 containers: []
	W1209 04:34:32.837052 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:34:32.837058 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:34:32.837136 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:34:32.863207 1193189 cri.go:89] found id: ""
	I1209 04:34:32.863221 1193189 logs.go:282] 0 containers: []
	W1209 04:34:32.863227 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:34:32.863242 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:34:32.863302 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:34:32.888280 1193189 cri.go:89] found id: ""
	I1209 04:34:32.888294 1193189 logs.go:282] 0 containers: []
	W1209 04:34:32.888301 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:34:32.888306 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:34:32.888365 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:34:32.912361 1193189 cri.go:89] found id: ""
	I1209 04:34:32.912375 1193189 logs.go:282] 0 containers: []
	W1209 04:34:32.912381 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:34:32.912387 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:34:32.912447 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:34:32.944341 1193189 cri.go:89] found id: ""
	I1209 04:34:32.944355 1193189 logs.go:282] 0 containers: []
	W1209 04:34:32.944363 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:34:32.944368 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:34:32.944427 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:34:32.974577 1193189 cri.go:89] found id: ""
	I1209 04:34:32.974592 1193189 logs.go:282] 0 containers: []
	W1209 04:34:32.974599 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:34:32.974604 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:34:32.974667 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:34:33.007167 1193189 cri.go:89] found id: ""
	I1209 04:34:33.007182 1193189 logs.go:282] 0 containers: []
	W1209 04:34:33.007188 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:34:33.007197 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:34:33.007208 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:34:33.072653 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:34:33.064421   11240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:33.065259   11240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:33.066881   11240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:33.067179   11240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:33.068654   11240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:34:33.072662 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:34:33.072674 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:34:33.135053 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:34:33.135075 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:34:33.166357 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:34:33.166374 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:34:33.223824 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:34:33.223844 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:34:35.741231 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:34:35.751318 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:34:35.751378 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:34:35.776735 1193189 cri.go:89] found id: ""
	I1209 04:34:35.776749 1193189 logs.go:282] 0 containers: []
	W1209 04:34:35.776755 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:34:35.776760 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:34:35.776825 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:34:35.805165 1193189 cri.go:89] found id: ""
	I1209 04:34:35.805178 1193189 logs.go:282] 0 containers: []
	W1209 04:34:35.805185 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:34:35.805190 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:34:35.805255 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:34:35.834579 1193189 cri.go:89] found id: ""
	I1209 04:34:35.834592 1193189 logs.go:282] 0 containers: []
	W1209 04:34:35.834599 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:34:35.834604 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:34:35.834668 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:34:35.864666 1193189 cri.go:89] found id: ""
	I1209 04:34:35.864680 1193189 logs.go:282] 0 containers: []
	W1209 04:34:35.864687 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:34:35.864692 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:34:35.864753 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:34:35.888987 1193189 cri.go:89] found id: ""
	I1209 04:34:35.889001 1193189 logs.go:282] 0 containers: []
	W1209 04:34:35.889008 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:34:35.889013 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:34:35.889073 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:34:35.913760 1193189 cri.go:89] found id: ""
	I1209 04:34:35.913774 1193189 logs.go:282] 0 containers: []
	W1209 04:34:35.913781 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:34:35.913787 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:34:35.913848 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:34:35.953491 1193189 cri.go:89] found id: ""
	I1209 04:34:35.953504 1193189 logs.go:282] 0 containers: []
	W1209 04:34:35.953511 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:34:35.953519 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:34:35.953529 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:34:36.017926 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:34:36.017947 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:34:36.036525 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:34:36.036542 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:34:36.100279 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:34:36.091110   11352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:36.091738   11352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:36.093351   11352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:36.093993   11352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:36.095716   11352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:34:36.091110   11352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:36.091738   11352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:36.093351   11352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:36.093993   11352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:36.095716   11352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:34:36.100289 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:34:36.100302 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:34:36.165176 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:34:36.165198 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:34:38.692274 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:34:38.702150 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:34:38.702209 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:34:38.727703 1193189 cri.go:89] found id: ""
	I1209 04:34:38.727718 1193189 logs.go:282] 0 containers: []
	W1209 04:34:38.727725 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:34:38.727739 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:34:38.727802 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:34:38.752490 1193189 cri.go:89] found id: ""
	I1209 04:34:38.752509 1193189 logs.go:282] 0 containers: []
	W1209 04:34:38.752515 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:34:38.752521 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:34:38.752582 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:34:38.776648 1193189 cri.go:89] found id: ""
	I1209 04:34:38.776662 1193189 logs.go:282] 0 containers: []
	W1209 04:34:38.776668 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:34:38.776676 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:34:38.776735 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:34:38.801762 1193189 cri.go:89] found id: ""
	I1209 04:34:38.801775 1193189 logs.go:282] 0 containers: []
	W1209 04:34:38.801782 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:34:38.801788 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:34:38.801849 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:34:38.825649 1193189 cri.go:89] found id: ""
	I1209 04:34:38.825662 1193189 logs.go:282] 0 containers: []
	W1209 04:34:38.825668 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:34:38.825673 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:34:38.825734 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:34:38.850253 1193189 cri.go:89] found id: ""
	I1209 04:34:38.850268 1193189 logs.go:282] 0 containers: []
	W1209 04:34:38.850274 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:34:38.850280 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:34:38.850342 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:34:38.878018 1193189 cri.go:89] found id: ""
	I1209 04:34:38.878032 1193189 logs.go:282] 0 containers: []
	W1209 04:34:38.878039 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:34:38.878046 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:34:38.878056 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:34:38.937715 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:34:38.937734 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:34:38.956265 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:34:38.956289 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:34:39.027118 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:34:39.019252   11455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:39.020066   11455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:39.021610   11455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:39.021907   11455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:39.023382   11455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:34:39.027128 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:34:39.027140 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:34:39.093921 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:34:39.093942 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:34:41.623796 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:34:41.634102 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:34:41.634167 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:34:41.661702 1193189 cri.go:89] found id: ""
	I1209 04:34:41.661716 1193189 logs.go:282] 0 containers: []
	W1209 04:34:41.661723 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:34:41.661728 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:34:41.661793 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:34:41.686941 1193189 cri.go:89] found id: ""
	I1209 04:34:41.686955 1193189 logs.go:282] 0 containers: []
	W1209 04:34:41.686962 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:34:41.686967 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:34:41.687026 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:34:41.716790 1193189 cri.go:89] found id: ""
	I1209 04:34:41.716805 1193189 logs.go:282] 0 containers: []
	W1209 04:34:41.716813 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:34:41.716818 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:34:41.716881 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:34:41.741120 1193189 cri.go:89] found id: ""
	I1209 04:34:41.741135 1193189 logs.go:282] 0 containers: []
	W1209 04:34:41.741141 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:34:41.741147 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:34:41.741206 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:34:41.765600 1193189 cri.go:89] found id: ""
	I1209 04:34:41.765614 1193189 logs.go:282] 0 containers: []
	W1209 04:34:41.765622 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:34:41.765627 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:34:41.765687 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:34:41.789956 1193189 cri.go:89] found id: ""
	I1209 04:34:41.789971 1193189 logs.go:282] 0 containers: []
	W1209 04:34:41.789978 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:34:41.789983 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:34:41.790047 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:34:41.813854 1193189 cri.go:89] found id: ""
	I1209 04:34:41.813868 1193189 logs.go:282] 0 containers: []
	W1209 04:34:41.813875 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:34:41.813883 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:34:41.813893 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:34:41.869283 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:34:41.869303 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:34:41.886263 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:34:41.886279 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:34:41.966783 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:34:41.957901   11556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:41.958580   11556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:41.960469   11556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:41.961191   11556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:41.962837   11556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:34:41.966793 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:34:41.966810 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:34:42.035421 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:34:42.035443 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:34:44.567350 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:34:44.577592 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:34:44.577656 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:34:44.607032 1193189 cri.go:89] found id: ""
	I1209 04:34:44.607047 1193189 logs.go:282] 0 containers: []
	W1209 04:34:44.607054 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:34:44.607059 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:34:44.607119 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:34:44.632031 1193189 cri.go:89] found id: ""
	I1209 04:34:44.632045 1193189 logs.go:282] 0 containers: []
	W1209 04:34:44.632052 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:34:44.632057 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:34:44.632116 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:34:44.656224 1193189 cri.go:89] found id: ""
	I1209 04:34:44.656237 1193189 logs.go:282] 0 containers: []
	W1209 04:34:44.656244 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:34:44.656249 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:34:44.656308 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:34:44.680302 1193189 cri.go:89] found id: ""
	I1209 04:34:44.680317 1193189 logs.go:282] 0 containers: []
	W1209 04:34:44.680323 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:34:44.680329 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:34:44.680389 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:34:44.705286 1193189 cri.go:89] found id: ""
	I1209 04:34:44.705301 1193189 logs.go:282] 0 containers: []
	W1209 04:34:44.705308 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:34:44.705319 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:34:44.705380 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:34:44.729365 1193189 cri.go:89] found id: ""
	I1209 04:34:44.729378 1193189 logs.go:282] 0 containers: []
	W1209 04:34:44.729385 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:34:44.729391 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:34:44.729452 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:34:44.753588 1193189 cri.go:89] found id: ""
	I1209 04:34:44.753601 1193189 logs.go:282] 0 containers: []
	W1209 04:34:44.753608 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:34:44.753616 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:34:44.753626 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:34:44.809786 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:34:44.809806 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:34:44.827005 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:34:44.827023 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:34:44.888308 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:34:44.880071   11658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:44.880850   11658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:44.882536   11658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:44.882961   11658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:44.884478   11658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:34:44.888318 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:34:44.888329 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:34:44.955975 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:34:44.955994 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:34:47.492101 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:34:47.502461 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:34:47.502521 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:34:47.527075 1193189 cri.go:89] found id: ""
	I1209 04:34:47.527089 1193189 logs.go:282] 0 containers: []
	W1209 04:34:47.527095 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:34:47.527109 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:34:47.527168 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:34:47.552346 1193189 cri.go:89] found id: ""
	I1209 04:34:47.552361 1193189 logs.go:282] 0 containers: []
	W1209 04:34:47.552368 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:34:47.552372 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:34:47.552439 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:34:47.577991 1193189 cri.go:89] found id: ""
	I1209 04:34:47.578005 1193189 logs.go:282] 0 containers: []
	W1209 04:34:47.578011 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:34:47.578017 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:34:47.578077 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:34:47.601711 1193189 cri.go:89] found id: ""
	I1209 04:34:47.601726 1193189 logs.go:282] 0 containers: []
	W1209 04:34:47.601733 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:34:47.601738 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:34:47.601799 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:34:47.626261 1193189 cri.go:89] found id: ""
	I1209 04:34:47.626274 1193189 logs.go:282] 0 containers: []
	W1209 04:34:47.626281 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:34:47.626287 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:34:47.626346 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:34:47.650195 1193189 cri.go:89] found id: ""
	I1209 04:34:47.650209 1193189 logs.go:282] 0 containers: []
	W1209 04:34:47.650215 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:34:47.650222 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:34:47.650289 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:34:47.674818 1193189 cri.go:89] found id: ""
	I1209 04:34:47.674844 1193189 logs.go:282] 0 containers: []
	W1209 04:34:47.674851 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:34:47.674858 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:34:47.674868 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:34:47.730669 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:34:47.730689 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:34:47.747530 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:34:47.747553 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:34:47.809873 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:34:47.800913   11766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:47.801626   11766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:47.803387   11766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:47.804067   11766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:47.805583   11766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:34:47.809893 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:34:47.809905 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:34:47.871413 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:34:47.871433 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:34:50.398661 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:34:50.408687 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:34:50.408759 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:34:50.432488 1193189 cri.go:89] found id: ""
	I1209 04:34:50.432507 1193189 logs.go:282] 0 containers: []
	W1209 04:34:50.432514 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:34:50.432520 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:34:50.432581 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:34:50.456531 1193189 cri.go:89] found id: ""
	I1209 04:34:50.456545 1193189 logs.go:282] 0 containers: []
	W1209 04:34:50.456552 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:34:50.456557 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:34:50.456617 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:34:50.484856 1193189 cri.go:89] found id: ""
	I1209 04:34:50.484871 1193189 logs.go:282] 0 containers: []
	W1209 04:34:50.484878 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:34:50.484884 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:34:50.484946 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:34:50.510277 1193189 cri.go:89] found id: ""
	I1209 04:34:50.510291 1193189 logs.go:282] 0 containers: []
	W1209 04:34:50.510297 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:34:50.510302 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:34:50.510361 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:34:50.533718 1193189 cri.go:89] found id: ""
	I1209 04:34:50.533744 1193189 logs.go:282] 0 containers: []
	W1209 04:34:50.533751 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:34:50.533756 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:34:50.533823 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:34:50.556925 1193189 cri.go:89] found id: ""
	I1209 04:34:50.556939 1193189 logs.go:282] 0 containers: []
	W1209 04:34:50.556945 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:34:50.556951 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:34:50.557010 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:34:50.581553 1193189 cri.go:89] found id: ""
	I1209 04:34:50.581567 1193189 logs.go:282] 0 containers: []
	W1209 04:34:50.581574 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:34:50.581582 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:34:50.581592 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:34:50.640077 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:34:50.640096 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:34:50.657419 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:34:50.657435 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:34:50.717755 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:34:50.710080   11873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:50.710723   11873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:50.711869   11873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:50.712446   11873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:50.713899   11873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:34:50.717765 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:34:50.717775 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:34:50.784823 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:34:50.784842 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:34:53.324166 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:34:53.333904 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:34:53.333963 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:34:53.357773 1193189 cri.go:89] found id: ""
	I1209 04:34:53.357787 1193189 logs.go:282] 0 containers: []
	W1209 04:34:53.357794 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:34:53.357799 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:34:53.357869 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:34:53.381476 1193189 cri.go:89] found id: ""
	I1209 04:34:53.381490 1193189 logs.go:282] 0 containers: []
	W1209 04:34:53.381498 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:34:53.381504 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:34:53.381563 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:34:53.404639 1193189 cri.go:89] found id: ""
	I1209 04:34:53.404653 1193189 logs.go:282] 0 containers: []
	W1209 04:34:53.404671 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:34:53.404677 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:34:53.404737 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:34:53.428572 1193189 cri.go:89] found id: ""
	I1209 04:34:53.428586 1193189 logs.go:282] 0 containers: []
	W1209 04:34:53.428593 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:34:53.428598 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:34:53.428656 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:34:53.453240 1193189 cri.go:89] found id: ""
	I1209 04:34:53.453254 1193189 logs.go:282] 0 containers: []
	W1209 04:34:53.453261 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:34:53.453266 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:34:53.453325 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:34:53.478715 1193189 cri.go:89] found id: ""
	I1209 04:34:53.478728 1193189 logs.go:282] 0 containers: []
	W1209 04:34:53.478735 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:34:53.478740 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:34:53.478798 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:34:53.503483 1193189 cri.go:89] found id: ""
	I1209 04:34:53.503497 1193189 logs.go:282] 0 containers: []
	W1209 04:34:53.503503 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:34:53.503511 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:34:53.503522 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:34:53.569898 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:34:53.560949   11970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:53.561857   11970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:53.563361   11970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:53.563947   11970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:53.565706   11970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:34:53.569907 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:34:53.569918 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:34:53.631345 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:34:53.631366 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:34:53.657935 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:34:53.657951 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:34:53.717129 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:34:53.717148 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:34:56.235149 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:34:56.245451 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:34:56.245512 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:34:56.273858 1193189 cri.go:89] found id: ""
	I1209 04:34:56.273872 1193189 logs.go:282] 0 containers: []
	W1209 04:34:56.273879 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:34:56.273884 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:34:56.273946 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:34:56.299990 1193189 cri.go:89] found id: ""
	I1209 04:34:56.300004 1193189 logs.go:282] 0 containers: []
	W1209 04:34:56.300036 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:34:56.300042 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:34:56.300109 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:34:56.325952 1193189 cri.go:89] found id: ""
	I1209 04:34:56.325965 1193189 logs.go:282] 0 containers: []
	W1209 04:34:56.325972 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:34:56.325977 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:34:56.326044 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:34:56.349999 1193189 cri.go:89] found id: ""
	I1209 04:34:56.350013 1193189 logs.go:282] 0 containers: []
	W1209 04:34:56.350020 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:34:56.350025 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:34:56.350088 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:34:56.376083 1193189 cri.go:89] found id: ""
	I1209 04:34:56.376097 1193189 logs.go:282] 0 containers: []
	W1209 04:34:56.376104 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:34:56.376109 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:34:56.376177 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:34:56.400259 1193189 cri.go:89] found id: ""
	I1209 04:34:56.400273 1193189 logs.go:282] 0 containers: []
	W1209 04:34:56.400280 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:34:56.400293 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:34:56.400352 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:34:56.424757 1193189 cri.go:89] found id: ""
	I1209 04:34:56.424777 1193189 logs.go:282] 0 containers: []
	W1209 04:34:56.424784 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:34:56.424792 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:34:56.424802 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:34:56.453832 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:34:56.453849 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:34:56.512444 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:34:56.512463 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:34:56.531303 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:34:56.531322 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:34:56.595582 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:34:56.587456   12094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:56.588255   12094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:56.589902   12094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:56.590193   12094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:56.591722   12094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:34:56.595592 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:34:56.595602 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:34:59.163281 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:34:59.173117 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:34:59.173176 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:34:59.206232 1193189 cri.go:89] found id: ""
	I1209 04:34:59.206246 1193189 logs.go:282] 0 containers: []
	W1209 04:34:59.206253 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:34:59.206257 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:34:59.206321 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:34:59.239889 1193189 cri.go:89] found id: ""
	I1209 04:34:59.239903 1193189 logs.go:282] 0 containers: []
	W1209 04:34:59.239910 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:34:59.239915 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:34:59.239977 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:34:59.268932 1193189 cri.go:89] found id: ""
	I1209 04:34:59.268946 1193189 logs.go:282] 0 containers: []
	W1209 04:34:59.268953 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:34:59.268958 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:34:59.269019 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:34:59.293191 1193189 cri.go:89] found id: ""
	I1209 04:34:59.293205 1193189 logs.go:282] 0 containers: []
	W1209 04:34:59.293211 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:34:59.293217 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:34:59.293279 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:34:59.317923 1193189 cri.go:89] found id: ""
	I1209 04:34:59.317936 1193189 logs.go:282] 0 containers: []
	W1209 04:34:59.317943 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:34:59.317948 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:34:59.318009 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:34:59.342336 1193189 cri.go:89] found id: ""
	I1209 04:34:59.342350 1193189 logs.go:282] 0 containers: []
	W1209 04:34:59.342356 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:34:59.342361 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:34:59.342419 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:34:59.366502 1193189 cri.go:89] found id: ""
	I1209 04:34:59.366517 1193189 logs.go:282] 0 containers: []
	W1209 04:34:59.366524 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:34:59.366532 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:34:59.366542 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:34:59.422133 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:34:59.422153 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:34:59.439160 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:34:59.439187 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:34:59.506261 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:34:59.497371   12189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:59.498039   12189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:59.499661   12189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:59.500189   12189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:59.501847   12189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:34:59.506271 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:34:59.506282 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:34:59.575415 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:34:59.575436 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:35:02.103491 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:35:02.113633 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:35:02.113694 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:35:02.144619 1193189 cri.go:89] found id: ""
	I1209 04:35:02.144633 1193189 logs.go:282] 0 containers: []
	W1209 04:35:02.144640 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:35:02.144646 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:35:02.144705 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:35:02.170344 1193189 cri.go:89] found id: ""
	I1209 04:35:02.170361 1193189 logs.go:282] 0 containers: []
	W1209 04:35:02.170368 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:35:02.170373 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:35:02.170433 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:35:02.197667 1193189 cri.go:89] found id: ""
	I1209 04:35:02.197691 1193189 logs.go:282] 0 containers: []
	W1209 04:35:02.197699 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:35:02.197704 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:35:02.197776 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:35:02.234579 1193189 cri.go:89] found id: ""
	I1209 04:35:02.234593 1193189 logs.go:282] 0 containers: []
	W1209 04:35:02.234600 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:35:02.234605 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:35:02.234676 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:35:02.261734 1193189 cri.go:89] found id: ""
	I1209 04:35:02.261750 1193189 logs.go:282] 0 containers: []
	W1209 04:35:02.261757 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:35:02.261763 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:35:02.261840 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:35:02.287117 1193189 cri.go:89] found id: ""
	I1209 04:35:02.287132 1193189 logs.go:282] 0 containers: []
	W1209 04:35:02.287149 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:35:02.287155 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:35:02.287215 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:35:02.316821 1193189 cri.go:89] found id: ""
	I1209 04:35:02.316841 1193189 logs.go:282] 0 containers: []
	W1209 04:35:02.316887 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:35:02.316894 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:35:02.316908 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:35:02.374344 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:35:02.374364 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:35:02.391657 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:35:02.391675 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:35:02.456609 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:35:02.448842   12295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:02.449370   12295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:02.450865   12295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:02.451343   12295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:02.452897   12295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:35:02.456619 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:35:02.456630 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:35:02.522522 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:35:02.522544 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
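For reference, the diagnostic sweep recorded above can be reproduced by hand on the node. Every command and flag below is taken verbatim from this log; only the wrapping loop (and running it interactively, e.g. via "minikube ssh", rather than through minikube's internal ssh_runner) is an assumption for illustration:

    # Sweep the expected control-plane containers; empty output means "not found",
    # matching the `found id: ""` lines above.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet; do
      sudo crictl ps -a --quiet --name="$name"
    done
    # Gather the same logs minikube collects on each retry.
    sudo journalctl -u kubelet -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo journalctl -u containerd -n 400
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
        --kubeconfig=/var/lib/minikube/kubeconfig   # fails with "connection refused"
                                                    # while the apiserver is down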
	I1209 04:35:05.052204 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:35:05.062711 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:35:05.062783 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:35:05.088683 1193189 cri.go:89] found id: ""
	I1209 04:35:05.088699 1193189 logs.go:282] 0 containers: []
	W1209 04:35:05.088708 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:35:05.088714 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:35:05.088786 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:35:05.114558 1193189 cri.go:89] found id: ""
	I1209 04:35:05.114573 1193189 logs.go:282] 0 containers: []
	W1209 04:35:05.114580 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:35:05.114585 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:35:05.114647 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:35:05.139679 1193189 cri.go:89] found id: ""
	I1209 04:35:05.139694 1193189 logs.go:282] 0 containers: []
	W1209 04:35:05.139701 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:35:05.139713 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:35:05.139785 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:35:05.165102 1193189 cri.go:89] found id: ""
	I1209 04:35:05.165116 1193189 logs.go:282] 0 containers: []
	W1209 04:35:05.165123 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:35:05.165129 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:35:05.165200 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:35:05.193330 1193189 cri.go:89] found id: ""
	I1209 04:35:05.193354 1193189 logs.go:282] 0 containers: []
	W1209 04:35:05.193361 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:35:05.193366 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:35:05.193434 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:35:05.225572 1193189 cri.go:89] found id: ""
	I1209 04:35:05.225602 1193189 logs.go:282] 0 containers: []
	W1209 04:35:05.225610 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:35:05.225615 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:35:05.225684 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:35:05.253111 1193189 cri.go:89] found id: ""
	I1209 04:35:05.253125 1193189 logs.go:282] 0 containers: []
	W1209 04:35:05.253134 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:35:05.253142 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:35:05.253151 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:35:05.311870 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:35:05.311891 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:35:05.329165 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:35:05.329181 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:35:05.403755 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:35:05.395160   12402 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:05.395840   12402 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:05.397743   12402 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:05.398247   12402 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:05.399758   12402 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:35:05.403765 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:35:05.403778 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:35:05.466140 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:35:05.466163 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:35:08.001482 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:35:08.012555 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:35:08.012621 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:35:08.038489 1193189 cri.go:89] found id: ""
	I1209 04:35:08.038502 1193189 logs.go:282] 0 containers: []
	W1209 04:35:08.038510 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:35:08.038515 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:35:08.038577 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:35:08.063791 1193189 cri.go:89] found id: ""
	I1209 04:35:08.063806 1193189 logs.go:282] 0 containers: []
	W1209 04:35:08.063813 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:35:08.063819 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:35:08.063883 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:35:08.088918 1193189 cri.go:89] found id: ""
	I1209 04:35:08.088933 1193189 logs.go:282] 0 containers: []
	W1209 04:35:08.088940 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:35:08.088945 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:35:08.089006 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:35:08.113601 1193189 cri.go:89] found id: ""
	I1209 04:35:08.113614 1193189 logs.go:282] 0 containers: []
	W1209 04:35:08.113623 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:35:08.113628 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:35:08.113684 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:35:08.136899 1193189 cri.go:89] found id: ""
	I1209 04:35:08.136912 1193189 logs.go:282] 0 containers: []
	W1209 04:35:08.136924 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:35:08.136929 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:35:08.136988 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:35:08.160001 1193189 cri.go:89] found id: ""
	I1209 04:35:08.160050 1193189 logs.go:282] 0 containers: []
	W1209 04:35:08.160057 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:35:08.160062 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:35:08.160119 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:35:08.193362 1193189 cri.go:89] found id: ""
	I1209 04:35:08.193375 1193189 logs.go:282] 0 containers: []
	W1209 04:35:08.193382 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:35:08.193390 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:35:08.193400 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:35:08.255924 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:35:08.255942 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:35:08.274860 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:35:08.274876 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:35:08.341852 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:35:08.333782   12503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:08.334529   12503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:08.336277   12503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:08.336676   12503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:08.338082   12503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:35:08.341863 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:35:08.341875 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:35:08.402199 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:35:08.402217 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:35:10.929478 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:35:10.939723 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:35:10.939784 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:35:10.964690 1193189 cri.go:89] found id: ""
	I1209 04:35:10.964704 1193189 logs.go:282] 0 containers: []
	W1209 04:35:10.964711 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:35:10.964716 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:35:10.964796 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:35:10.993239 1193189 cri.go:89] found id: ""
	I1209 04:35:10.993253 1193189 logs.go:282] 0 containers: []
	W1209 04:35:10.993260 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:35:10.993265 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:35:10.993323 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:35:11.019779 1193189 cri.go:89] found id: ""
	I1209 04:35:11.019793 1193189 logs.go:282] 0 containers: []
	W1209 04:35:11.019800 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:35:11.019805 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:35:11.019867 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:35:11.044082 1193189 cri.go:89] found id: ""
	I1209 04:35:11.044095 1193189 logs.go:282] 0 containers: []
	W1209 04:35:11.044104 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:35:11.044109 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:35:11.044170 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:35:11.067732 1193189 cri.go:89] found id: ""
	I1209 04:35:11.067746 1193189 logs.go:282] 0 containers: []
	W1209 04:35:11.067753 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:35:11.067758 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:35:11.067827 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:35:11.094131 1193189 cri.go:89] found id: ""
	I1209 04:35:11.094145 1193189 logs.go:282] 0 containers: []
	W1209 04:35:11.094152 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:35:11.094157 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:35:11.094217 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:35:11.120246 1193189 cri.go:89] found id: ""
	I1209 04:35:11.120261 1193189 logs.go:282] 0 containers: []
	W1209 04:35:11.120269 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:35:11.120277 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:35:11.120288 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:35:11.188699 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:35:11.188719 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:35:11.220249 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:35:11.220272 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:35:11.281813 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:35:11.281834 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:35:11.299608 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:35:11.299624 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:35:11.364974 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:35:11.356357   12622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:11.357044   12622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:11.358808   12622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:11.359431   12622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:11.361160   12622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:35:13.865252 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:35:13.875906 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:35:13.875966 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:35:13.901925 1193189 cri.go:89] found id: ""
	I1209 04:35:13.901941 1193189 logs.go:282] 0 containers: []
	W1209 04:35:13.901947 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:35:13.901953 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:35:13.902023 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:35:13.929808 1193189 cri.go:89] found id: ""
	I1209 04:35:13.929823 1193189 logs.go:282] 0 containers: []
	W1209 04:35:13.929830 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:35:13.929835 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:35:13.929896 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:35:13.955030 1193189 cri.go:89] found id: ""
	I1209 04:35:13.955045 1193189 logs.go:282] 0 containers: []
	W1209 04:35:13.955051 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:35:13.955056 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:35:13.955114 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:35:13.979829 1193189 cri.go:89] found id: ""
	I1209 04:35:13.979843 1193189 logs.go:282] 0 containers: []
	W1209 04:35:13.979849 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:35:13.979854 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:35:13.979918 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:35:14.007254 1193189 cri.go:89] found id: ""
	I1209 04:35:14.007269 1193189 logs.go:282] 0 containers: []
	W1209 04:35:14.007275 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:35:14.007281 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:35:14.007345 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:35:14.032915 1193189 cri.go:89] found id: ""
	I1209 04:35:14.032929 1193189 logs.go:282] 0 containers: []
	W1209 04:35:14.032936 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:35:14.032941 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:35:14.032999 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:35:14.061801 1193189 cri.go:89] found id: ""
	I1209 04:35:14.061826 1193189 logs.go:282] 0 containers: []
	W1209 04:35:14.061834 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:35:14.061842 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:35:14.061853 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:35:14.125545 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:35:14.117510   12701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:14.118249   12701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:14.119815   12701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:14.120178   12701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:14.121732   12701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:35:14.125555 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:35:14.125569 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:35:14.192586 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:35:14.192605 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:35:14.223400 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:35:14.223417 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:35:14.284525 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:35:14.284545 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:35:16.802913 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:35:16.812669 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:35:16.812730 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:35:16.836304 1193189 cri.go:89] found id: ""
	I1209 04:35:16.836318 1193189 logs.go:282] 0 containers: []
	W1209 04:35:16.836324 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:35:16.836329 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:35:16.836386 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:35:16.861382 1193189 cri.go:89] found id: ""
	I1209 04:35:16.861396 1193189 logs.go:282] 0 containers: []
	W1209 04:35:16.861403 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:35:16.861407 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:35:16.861467 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:35:16.884827 1193189 cri.go:89] found id: ""
	I1209 04:35:16.884841 1193189 logs.go:282] 0 containers: []
	W1209 04:35:16.884848 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:35:16.884853 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:35:16.884913 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:35:16.907933 1193189 cri.go:89] found id: ""
	I1209 04:35:16.907946 1193189 logs.go:282] 0 containers: []
	W1209 04:35:16.907953 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:35:16.907959 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:35:16.908028 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:35:16.933329 1193189 cri.go:89] found id: ""
	I1209 04:35:16.933344 1193189 logs.go:282] 0 containers: []
	W1209 04:35:16.933350 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:35:16.933355 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:35:16.933418 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:35:16.957725 1193189 cri.go:89] found id: ""
	I1209 04:35:16.957739 1193189 logs.go:282] 0 containers: []
	W1209 04:35:16.957745 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:35:16.957751 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:35:16.957807 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:35:16.981209 1193189 cri.go:89] found id: ""
	I1209 04:35:16.981223 1193189 logs.go:282] 0 containers: []
	W1209 04:35:16.981231 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:35:16.981240 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:35:16.981249 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:35:17.039472 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:35:17.039491 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:35:17.056497 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:35:17.056514 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:35:17.119231 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:35:17.111277   12811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:17.111948   12811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:17.113585   12811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:17.114023   12811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:17.115511   12811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:35:17.119240 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:35:17.119251 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:35:17.181494 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:35:17.181513 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:35:19.709396 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:35:19.719323 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:35:19.719388 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:35:19.743245 1193189 cri.go:89] found id: ""
	I1209 04:35:19.743259 1193189 logs.go:282] 0 containers: []
	W1209 04:35:19.743266 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:35:19.743271 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:35:19.743328 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:35:19.767566 1193189 cri.go:89] found id: ""
	I1209 04:35:19.767581 1193189 logs.go:282] 0 containers: []
	W1209 04:35:19.767587 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:35:19.767592 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:35:19.767649 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:35:19.797227 1193189 cri.go:89] found id: ""
	I1209 04:35:19.797241 1193189 logs.go:282] 0 containers: []
	W1209 04:35:19.797248 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:35:19.797253 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:35:19.797311 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:35:19.820451 1193189 cri.go:89] found id: ""
	I1209 04:35:19.820465 1193189 logs.go:282] 0 containers: []
	W1209 04:35:19.820471 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:35:19.820477 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:35:19.820534 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:35:19.844577 1193189 cri.go:89] found id: ""
	I1209 04:35:19.844591 1193189 logs.go:282] 0 containers: []
	W1209 04:35:19.844597 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:35:19.844603 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:35:19.844661 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:35:19.868336 1193189 cri.go:89] found id: ""
	I1209 04:35:19.868350 1193189 logs.go:282] 0 containers: []
	W1209 04:35:19.868356 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:35:19.868362 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:35:19.868430 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:35:19.893016 1193189 cri.go:89] found id: ""
	I1209 04:35:19.893030 1193189 logs.go:282] 0 containers: []
	W1209 04:35:19.893037 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:35:19.893045 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:35:19.893055 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:35:19.947540 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:35:19.947561 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:35:19.964623 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:35:19.964640 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:35:20.041799 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:35:20.033487   12917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:20.034256   12917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:20.035804   12917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:20.036289   12917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:20.037849   12917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:35:20.041809 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:35:20.041829 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:35:20.106338 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:35:20.106361 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:35:22.634358 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:35:22.644145 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:35:22.644208 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:35:22.670154 1193189 cri.go:89] found id: ""
	I1209 04:35:22.670171 1193189 logs.go:282] 0 containers: []
	W1209 04:35:22.670178 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:35:22.670189 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:35:22.670255 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:35:22.704705 1193189 cri.go:89] found id: ""
	I1209 04:35:22.704724 1193189 logs.go:282] 0 containers: []
	W1209 04:35:22.704731 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:35:22.704742 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:35:22.704815 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:35:22.729994 1193189 cri.go:89] found id: ""
	I1209 04:35:22.730010 1193189 logs.go:282] 0 containers: []
	W1209 04:35:22.730016 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:35:22.730021 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:35:22.730085 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:35:22.755372 1193189 cri.go:89] found id: ""
	I1209 04:35:22.755386 1193189 logs.go:282] 0 containers: []
	W1209 04:35:22.755393 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:35:22.755399 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:35:22.755468 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:35:22.781698 1193189 cri.go:89] found id: ""
	I1209 04:35:22.781712 1193189 logs.go:282] 0 containers: []
	W1209 04:35:22.781718 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:35:22.781724 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:35:22.781783 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:35:22.810395 1193189 cri.go:89] found id: ""
	I1209 04:35:22.810409 1193189 logs.go:282] 0 containers: []
	W1209 04:35:22.810417 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:35:22.810422 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:35:22.810491 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:35:22.834867 1193189 cri.go:89] found id: ""
	I1209 04:35:22.834881 1193189 logs.go:282] 0 containers: []
	W1209 04:35:22.834888 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:35:22.834896 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:35:22.834914 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:35:22.895493 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:35:22.895514 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:35:22.923338 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:35:22.923355 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:35:22.981048 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:35:22.981069 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:35:22.998202 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:35:22.998221 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:35:23.060221 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:35:23.052398   13038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:23.053078   13038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:23.054527   13038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:23.054989   13038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:23.056396   13038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:35:25.561920 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:35:25.571773 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:35:25.571837 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:35:25.595193 1193189 cri.go:89] found id: ""
	I1209 04:35:25.595207 1193189 logs.go:282] 0 containers: []
	W1209 04:35:25.595215 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:35:25.595220 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:35:25.595285 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:35:25.619637 1193189 cri.go:89] found id: ""
	I1209 04:35:25.619651 1193189 logs.go:282] 0 containers: []
	W1209 04:35:25.619658 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:35:25.619664 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:35:25.619726 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:35:25.644298 1193189 cri.go:89] found id: ""
	I1209 04:35:25.644313 1193189 logs.go:282] 0 containers: []
	W1209 04:35:25.644319 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:35:25.644325 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:35:25.644384 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:35:25.668990 1193189 cri.go:89] found id: ""
	I1209 04:35:25.669003 1193189 logs.go:282] 0 containers: []
	W1209 04:35:25.669011 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:35:25.669016 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:35:25.669078 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:35:25.693184 1193189 cri.go:89] found id: ""
	I1209 04:35:25.693199 1193189 logs.go:282] 0 containers: []
	W1209 04:35:25.693206 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:35:25.693211 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:35:25.693269 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:35:25.718924 1193189 cri.go:89] found id: ""
	I1209 04:35:25.718939 1193189 logs.go:282] 0 containers: []
	W1209 04:35:25.718946 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:35:25.718951 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:35:25.719014 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:35:25.744270 1193189 cri.go:89] found id: ""
	I1209 04:35:25.744287 1193189 logs.go:282] 0 containers: []
	W1209 04:35:25.744294 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:35:25.744303 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:35:25.744313 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:35:25.775297 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:35:25.775312 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:35:25.830399 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:35:25.830417 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:35:25.846995 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:35:25.847011 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:35:25.907973 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:35:25.899536   13140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:25.899964   13140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:25.901112   13140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:25.902600   13140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:25.903121   13140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:35:25.899536   13140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:25.899964   13140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:25.901112   13140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:25.902600   13140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:25.903121   13140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:35:25.908000 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:35:25.908009 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
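The passes above and below are one iteration of minikube's apiserver wait loop: each pass looks for a kube-apiserver process with pgrep, asks crictl for every expected control-plane container by name, and, finding none, falls back to tailing kubelet, dmesg, describe-nodes, and containerd output before retrying a few seconds later. The same probe can be reproduced by hand; a minimal sketch, assuming the functional-667319 profile from this run is still up, mirroring the Run: lines in the log:

    # does a kube-apiserver process exist at all?
    minikube ssh -p functional-667319 -- sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    # enumerate control-plane containers the way the wait loop does
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet; do
      minikube ssh -p functional-667319 -- sudo crictl ps -a --quiet --name="$c"
    done

An empty result from every crictl query, as seen here, means the containers were never created rather than created and crashed (crictl ps -a includes exited containers), which points at kubelet or the static-pod manifests rather than at the components themselves.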
	I1209 04:35:28.475800 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:35:28.486363 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:35:28.486434 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:35:28.511630 1193189 cri.go:89] found id: ""
	I1209 04:35:28.511649 1193189 logs.go:282] 0 containers: []
	W1209 04:35:28.511657 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:35:28.511662 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:35:28.511734 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:35:28.539616 1193189 cri.go:89] found id: ""
	I1209 04:35:28.539631 1193189 logs.go:282] 0 containers: []
	W1209 04:35:28.539638 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:35:28.539643 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:35:28.539704 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:35:28.563311 1193189 cri.go:89] found id: ""
	I1209 04:35:28.563325 1193189 logs.go:282] 0 containers: []
	W1209 04:35:28.563333 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:35:28.563338 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:35:28.563399 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:35:28.591490 1193189 cri.go:89] found id: ""
	I1209 04:35:28.591504 1193189 logs.go:282] 0 containers: []
	W1209 04:35:28.591511 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:35:28.591516 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:35:28.591574 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:35:28.614638 1193189 cri.go:89] found id: ""
	I1209 04:35:28.614653 1193189 logs.go:282] 0 containers: []
	W1209 04:35:28.614660 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:35:28.614665 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:35:28.614729 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:35:28.638698 1193189 cri.go:89] found id: ""
	I1209 04:35:28.638712 1193189 logs.go:282] 0 containers: []
	W1209 04:35:28.638720 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:35:28.638727 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:35:28.638788 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:35:28.665819 1193189 cri.go:89] found id: ""
	I1209 04:35:28.665837 1193189 logs.go:282] 0 containers: []
	W1209 04:35:28.665843 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:35:28.665851 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:35:28.665861 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:35:28.693372 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:35:28.693387 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:35:28.750183 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:35:28.750203 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:35:28.768641 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:35:28.768659 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:35:28.832332 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:35:28.823785   13239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:28.824261   13239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:28.826084   13239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:28.826770   13239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:28.828406   13239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:35:28.823785   13239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:28.824261   13239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:28.826084   13239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:28.826770   13239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:28.828406   13239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:35:28.832342 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:35:28.832352 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
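The describe-nodes failure repeated in each pass is itself diagnostic: kubectl on the node dials https://localhost:8441 (the apiserver port for this profile) and gets connection refused, meaning nothing is listening at all, which is consistent with the empty crictl listings; the repeated memcache.go lines within each attempt appear to be kubectl's discovery retries rather than separate faults. A sketch for distinguishing "no listener" from "listener but unhealthy", assuming curl is available in the node image:

    # is anything bound to the apiserver port on the node?
    minikube ssh -p functional-667319 -- sudo ss -ltn 'sport = :8441'
    # connection refused here means no listener; any HTTP response would mean up-but-unhealthy
    minikube ssh -p functional-667319 -- curl -sk https://localhost:8441/healthz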
	I1209 04:35:31.394797 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:35:31.404399 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:35:31.404459 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:35:31.427865 1193189 cri.go:89] found id: ""
	I1209 04:35:31.427879 1193189 logs.go:282] 0 containers: []
	W1209 04:35:31.427886 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:35:31.427893 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:35:31.427957 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:35:31.465245 1193189 cri.go:89] found id: ""
	I1209 04:35:31.465259 1193189 logs.go:282] 0 containers: []
	W1209 04:35:31.465266 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:35:31.465271 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:35:31.465333 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:35:31.499189 1193189 cri.go:89] found id: ""
	I1209 04:35:31.499202 1193189 logs.go:282] 0 containers: []
	W1209 04:35:31.499209 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:35:31.499215 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:35:31.499272 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:35:31.525936 1193189 cri.go:89] found id: ""
	I1209 04:35:31.525950 1193189 logs.go:282] 0 containers: []
	W1209 04:35:31.525958 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:35:31.525963 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:35:31.526023 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:35:31.550933 1193189 cri.go:89] found id: ""
	I1209 04:35:31.550948 1193189 logs.go:282] 0 containers: []
	W1209 04:35:31.550955 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:35:31.550960 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:35:31.551019 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:35:31.574667 1193189 cri.go:89] found id: ""
	I1209 04:35:31.574681 1193189 logs.go:282] 0 containers: []
	W1209 04:35:31.574689 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:35:31.574694 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:35:31.574754 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:35:31.599346 1193189 cri.go:89] found id: ""
	I1209 04:35:31.599360 1193189 logs.go:282] 0 containers: []
	W1209 04:35:31.599367 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:35:31.599374 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:35:31.599384 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:35:31.625893 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:35:31.625912 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:35:31.681164 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:35:31.681181 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:35:31.697997 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:35:31.698014 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:35:31.765231 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:35:31.757080   13343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:31.757463   13343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:31.759010   13343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:31.759311   13343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:31.760784   13343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:35:31.757080   13343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:31.757463   13343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:31.759010   13343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:31.759311   13343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:31.760784   13343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:35:31.765242 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:35:31.765253 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:35:34.325149 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:35:34.334839 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:35:34.334897 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:35:34.359238 1193189 cri.go:89] found id: ""
	I1209 04:35:34.359251 1193189 logs.go:282] 0 containers: []
	W1209 04:35:34.359258 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:35:34.359263 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:35:34.359324 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:35:34.383217 1193189 cri.go:89] found id: ""
	I1209 04:35:34.383231 1193189 logs.go:282] 0 containers: []
	W1209 04:35:34.383237 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:35:34.383242 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:35:34.383301 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:35:34.407421 1193189 cri.go:89] found id: ""
	I1209 04:35:34.407435 1193189 logs.go:282] 0 containers: []
	W1209 04:35:34.407442 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:35:34.407454 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:35:34.407513 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:35:34.440852 1193189 cri.go:89] found id: ""
	I1209 04:35:34.440865 1193189 logs.go:282] 0 containers: []
	W1209 04:35:34.440872 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:35:34.440878 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:35:34.440938 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:35:34.474370 1193189 cri.go:89] found id: ""
	I1209 04:35:34.474382 1193189 logs.go:282] 0 containers: []
	W1209 04:35:34.474389 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:35:34.474400 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:35:34.474459 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:35:34.503074 1193189 cri.go:89] found id: ""
	I1209 04:35:34.503088 1193189 logs.go:282] 0 containers: []
	W1209 04:35:34.503095 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:35:34.503103 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:35:34.503160 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:35:34.533672 1193189 cri.go:89] found id: ""
	I1209 04:35:34.533686 1193189 logs.go:282] 0 containers: []
	W1209 04:35:34.533693 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:35:34.533701 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:35:34.533711 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:35:34.550119 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:35:34.550138 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:35:34.614817 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:35:34.606452   13437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:34.606849   13437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:34.608481   13437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:34.609159   13437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:34.610864   13437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:35:34.606452   13437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:34.606849   13437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:34.608481   13437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:34.609159   13437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:34.610864   13437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:35:34.614827 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:35:34.614837 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:35:34.677461 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:35:34.677482 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:35:34.703505 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:35:34.703520 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
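For offline triage, the same sources the loop tails can be copied out in one go. In the Run: lines, journalctl -u <unit> -n 400 prints the last 400 journal lines for a unit, and dmesg -PH -L=never --level warn,err,crit,alert,emerg gives unpaged (-P), human-readable (-H), uncolored (-L=never) kernel messages at warning severity and above. A sketch, again assuming the functional-667319 profile:

    minikube ssh -p functional-667319 -- sudo journalctl -u kubelet -n 400 --no-pager > kubelet.log
    minikube ssh -p functional-667319 -- sudo journalctl -u containerd -n 400 --no-pager > containerd.log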
	I1209 04:35:37.258780 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:35:37.268941 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:35:37.269002 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:35:37.292668 1193189 cri.go:89] found id: ""
	I1209 04:35:37.292682 1193189 logs.go:282] 0 containers: []
	W1209 04:35:37.292689 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:35:37.292694 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:35:37.292757 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:35:37.320157 1193189 cri.go:89] found id: ""
	I1209 04:35:37.320171 1193189 logs.go:282] 0 containers: []
	W1209 04:35:37.320177 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:35:37.320183 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:35:37.320240 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:35:37.343858 1193189 cri.go:89] found id: ""
	I1209 04:35:37.343872 1193189 logs.go:282] 0 containers: []
	W1209 04:35:37.343879 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:35:37.343884 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:35:37.343947 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:35:37.366919 1193189 cri.go:89] found id: ""
	I1209 04:35:37.366932 1193189 logs.go:282] 0 containers: []
	W1209 04:35:37.366939 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:35:37.366945 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:35:37.367003 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:35:37.391330 1193189 cri.go:89] found id: ""
	I1209 04:35:37.391344 1193189 logs.go:282] 0 containers: []
	W1209 04:35:37.391351 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:35:37.391356 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:35:37.391417 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:35:37.414885 1193189 cri.go:89] found id: ""
	I1209 04:35:37.414899 1193189 logs.go:282] 0 containers: []
	W1209 04:35:37.414906 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:35:37.414911 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:35:37.414967 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:35:37.440557 1193189 cri.go:89] found id: ""
	I1209 04:35:37.440570 1193189 logs.go:282] 0 containers: []
	W1209 04:35:37.440577 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:35:37.440585 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:35:37.440595 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:35:37.501076 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:35:37.501094 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:35:37.523552 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:35:37.523569 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:35:37.590387 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:35:37.582017   13547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:37.582700   13547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:37.584424   13547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:37.584939   13547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:37.586547   13547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:35:37.582017   13547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:37.582700   13547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:37.584424   13547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:37.584939   13547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:37.586547   13547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:35:37.590397 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:35:37.590408 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:35:37.653090 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:35:37.653108 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
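The kubectl binary and kubeconfig paths in the describe-nodes command are minikube's own: it keeps a version-pinned kubectl under /var/lib/minikube/binaries/<version>/ on the node and a node-local kubeconfig at /var/lib/minikube/kubeconfig. The same invocation can be replayed by hand to watch for the moment the apiserver starts answering; a sketch:

    minikube ssh -p functional-667319 -- sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl \
      --kubeconfig=/var/lib/minikube/kubeconfig get nodes

While the port is refusing connections this fails exactly as in the log; once a kube-apiserver container exists it should return the node list instead.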
	I1209 04:35:40.184839 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:35:40.195112 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:35:40.195177 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:35:40.221158 1193189 cri.go:89] found id: ""
	I1209 04:35:40.221173 1193189 logs.go:282] 0 containers: []
	W1209 04:35:40.221180 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:35:40.221185 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:35:40.221246 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:35:40.246395 1193189 cri.go:89] found id: ""
	I1209 04:35:40.246415 1193189 logs.go:282] 0 containers: []
	W1209 04:35:40.246422 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:35:40.246428 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:35:40.246487 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:35:40.270697 1193189 cri.go:89] found id: ""
	I1209 04:35:40.270711 1193189 logs.go:282] 0 containers: []
	W1209 04:35:40.270718 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:35:40.270723 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:35:40.270781 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:35:40.295006 1193189 cri.go:89] found id: ""
	I1209 04:35:40.295021 1193189 logs.go:282] 0 containers: []
	W1209 04:35:40.295028 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:35:40.295033 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:35:40.295093 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:35:40.319784 1193189 cri.go:89] found id: ""
	I1209 04:35:40.319797 1193189 logs.go:282] 0 containers: []
	W1209 04:35:40.319804 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:35:40.319810 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:35:40.319872 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:35:40.344094 1193189 cri.go:89] found id: ""
	I1209 04:35:40.344108 1193189 logs.go:282] 0 containers: []
	W1209 04:35:40.344115 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:35:40.344120 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:35:40.344181 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:35:40.368626 1193189 cri.go:89] found id: ""
	I1209 04:35:40.368640 1193189 logs.go:282] 0 containers: []
	W1209 04:35:40.368647 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:35:40.368654 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:35:40.368665 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:35:40.423837 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:35:40.423857 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:35:40.452134 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:35:40.452157 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:35:40.527559 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:35:40.519583   13649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:40.519986   13649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:40.521271   13649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:40.521835   13649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:40.523570   13649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:35:40.519583   13649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:40.519986   13649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:40.521271   13649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:40.521835   13649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:40.523570   13649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:35:40.527610 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:35:40.527620 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:35:40.588474 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:35:40.588495 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:35:43.118634 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:35:43.128671 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:35:43.128738 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:35:43.152143 1193189 cri.go:89] found id: ""
	I1209 04:35:43.152158 1193189 logs.go:282] 0 containers: []
	W1209 04:35:43.152179 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:35:43.152185 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:35:43.152255 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:35:43.176188 1193189 cri.go:89] found id: ""
	I1209 04:35:43.176203 1193189 logs.go:282] 0 containers: []
	W1209 04:35:43.176210 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:35:43.176215 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:35:43.176275 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:35:43.199682 1193189 cri.go:89] found id: ""
	I1209 04:35:43.199696 1193189 logs.go:282] 0 containers: []
	W1209 04:35:43.199702 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:35:43.199707 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:35:43.199767 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:35:43.224229 1193189 cri.go:89] found id: ""
	I1209 04:35:43.224244 1193189 logs.go:282] 0 containers: []
	W1209 04:35:43.224251 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:35:43.224257 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:35:43.224318 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:35:43.249684 1193189 cri.go:89] found id: ""
	I1209 04:35:43.249698 1193189 logs.go:282] 0 containers: []
	W1209 04:35:43.249705 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:35:43.249710 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:35:43.249773 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:35:43.273701 1193189 cri.go:89] found id: ""
	I1209 04:35:43.273715 1193189 logs.go:282] 0 containers: []
	W1209 04:35:43.273724 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:35:43.273729 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:35:43.273790 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:35:43.297360 1193189 cri.go:89] found id: ""
	I1209 04:35:43.297375 1193189 logs.go:282] 0 containers: []
	W1209 04:35:43.297382 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:35:43.297389 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:35:43.297400 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:35:43.323849 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:35:43.323865 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:35:43.380806 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:35:43.380825 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:35:43.397905 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:35:43.397924 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:35:43.474648 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:35:43.464143   13758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:43.464857   13758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:43.468210   13758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:43.468799   13758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:43.470475   13758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:35:43.464143   13758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:43.464857   13758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:43.468210   13758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:43.468799   13758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:43.470475   13758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:35:43.474658 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:35:43.474668 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:35:46.038037 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:35:46.048448 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:35:46.048513 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:35:46.073156 1193189 cri.go:89] found id: ""
	I1209 04:35:46.073170 1193189 logs.go:282] 0 containers: []
	W1209 04:35:46.073177 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:35:46.073182 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:35:46.073246 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:35:46.103227 1193189 cri.go:89] found id: ""
	I1209 04:35:46.103242 1193189 logs.go:282] 0 containers: []
	W1209 04:35:46.103249 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:35:46.103255 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:35:46.103324 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:35:46.126371 1193189 cri.go:89] found id: ""
	I1209 04:35:46.126385 1193189 logs.go:282] 0 containers: []
	W1209 04:35:46.126392 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:35:46.126397 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:35:46.126457 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:35:46.151271 1193189 cri.go:89] found id: ""
	I1209 04:35:46.151284 1193189 logs.go:282] 0 containers: []
	W1209 04:35:46.151291 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:35:46.151296 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:35:46.151354 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:35:46.175057 1193189 cri.go:89] found id: ""
	I1209 04:35:46.175071 1193189 logs.go:282] 0 containers: []
	W1209 04:35:46.175077 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:35:46.175082 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:35:46.175140 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:35:46.203063 1193189 cri.go:89] found id: ""
	I1209 04:35:46.203078 1193189 logs.go:282] 0 containers: []
	W1209 04:35:46.203085 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:35:46.203091 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:35:46.203148 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:35:46.229251 1193189 cri.go:89] found id: ""
	I1209 04:35:46.229267 1193189 logs.go:282] 0 containers: []
	W1209 04:35:46.229274 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:35:46.229281 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:35:46.229291 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:35:46.298699 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:35:46.289900   13846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:46.290515   13846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:46.292304   13846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:46.292640   13846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:46.294235   13846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:35:46.289900   13846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:46.290515   13846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:46.292304   13846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:46.292640   13846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:46.294235   13846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:35:46.298709 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:35:46.298720 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:35:46.363949 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:35:46.363976 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:35:46.391889 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:35:46.391906 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:35:46.454456 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:35:46.454483 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
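One detail worth noting when reading these passes side by side: the five "Gathering logs for ..." steps run in a different order each iteration (dmesg first in one pass, container status or kubelet first in another). That is consistent with the log sources being iterated from a Go map, whose iteration order is deliberately randomized; it is harmless, but it means the passes cannot be compared line by line. A sketch for making the shuffle visible in a saved copy of this log (start.log is a hypothetical file name):

    grep -o 'Gathering logs for [a-z ]*' start.log | head -n 20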
	I1209 04:35:48.975649 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:35:48.985708 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:35:48.985766 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:35:49.011399 1193189 cri.go:89] found id: ""
	I1209 04:35:49.011413 1193189 logs.go:282] 0 containers: []
	W1209 04:35:49.011420 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:35:49.011426 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:35:49.011483 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:35:49.036873 1193189 cri.go:89] found id: ""
	I1209 04:35:49.036887 1193189 logs.go:282] 0 containers: []
	W1209 04:35:49.036894 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:35:49.036899 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:35:49.036960 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:35:49.066005 1193189 cri.go:89] found id: ""
	I1209 04:35:49.066019 1193189 logs.go:282] 0 containers: []
	W1209 04:35:49.066025 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:35:49.066031 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:35:49.066091 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:35:49.093270 1193189 cri.go:89] found id: ""
	I1209 04:35:49.093284 1193189 logs.go:282] 0 containers: []
	W1209 04:35:49.093291 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:35:49.093297 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:35:49.093357 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:35:49.116583 1193189 cri.go:89] found id: ""
	I1209 04:35:49.116597 1193189 logs.go:282] 0 containers: []
	W1209 04:35:49.116604 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:35:49.116609 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:35:49.116667 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:35:49.141295 1193189 cri.go:89] found id: ""
	I1209 04:35:49.141309 1193189 logs.go:282] 0 containers: []
	W1209 04:35:49.141316 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:35:49.141321 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:35:49.141382 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:35:49.164496 1193189 cri.go:89] found id: ""
	I1209 04:35:49.164509 1193189 logs.go:282] 0 containers: []
	W1209 04:35:49.164516 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:35:49.164524 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:35:49.164533 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:35:49.220406 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:35:49.220426 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:35:49.237143 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:35:49.237159 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:35:49.305702 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:35:49.296253   13955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:49.297596   13955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:49.298689   13955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:49.299456   13955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:49.301121   13955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:35:49.296253   13955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:49.297596   13955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:49.298689   13955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:49.299456   13955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:49.301121   13955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:35:49.305724 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:35:49.305737 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:35:49.367200 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:35:49.367219 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
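The block above is one full iteration of minikube's apiserver wait loop: each expected control-plane container (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet) is listed via crictl, none is found, and the usual log sources are re-gathered. A minimal sketch of the same per-container check, assuming shell access to the node (e.g. via minikube ssh):

    # For each expected control-plane container, list matching IDs with crictl;
    # an empty result corresponds to the 'No container was found matching' lines.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      if [ -z "$ids" ]; then
        echo "no container found matching \"$name\""
      else
        echo "$name -> $ids"
      fi
    done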
	I1209 04:35:51.895283 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:35:51.905706 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:35:51.905765 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:35:51.929677 1193189 cri.go:89] found id: ""
	I1209 04:35:51.929691 1193189 logs.go:282] 0 containers: []
	W1209 04:35:51.929698 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:35:51.929703 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:35:51.929764 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:35:51.953232 1193189 cri.go:89] found id: ""
	I1209 04:35:51.953246 1193189 logs.go:282] 0 containers: []
	W1209 04:35:51.953252 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:35:51.953257 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:35:51.953314 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:35:51.979515 1193189 cri.go:89] found id: ""
	I1209 04:35:51.979528 1193189 logs.go:282] 0 containers: []
	W1209 04:35:51.979535 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:35:51.979540 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:35:51.979601 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:35:52.009061 1193189 cri.go:89] found id: ""
	I1209 04:35:52.009075 1193189 logs.go:282] 0 containers: []
	W1209 04:35:52.009082 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:35:52.009087 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:35:52.009154 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:35:52.036289 1193189 cri.go:89] found id: ""
	I1209 04:35:52.036309 1193189 logs.go:282] 0 containers: []
	W1209 04:35:52.036316 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:35:52.036321 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:35:52.036386 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:35:52.061853 1193189 cri.go:89] found id: ""
	I1209 04:35:52.061867 1193189 logs.go:282] 0 containers: []
	W1209 04:35:52.061874 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:35:52.061879 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:35:52.061942 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:35:52.090416 1193189 cri.go:89] found id: ""
	I1209 04:35:52.090443 1193189 logs.go:282] 0 containers: []
	W1209 04:35:52.090451 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:35:52.090459 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:35:52.090469 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:35:52.120980 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:35:52.120996 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:35:52.177079 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:35:52.177098 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:35:52.195520 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:35:52.195537 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:35:52.260151 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:35:52.251913   14072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:52.252734   14072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:52.254403   14072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:52.254982   14072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:52.256470   14072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:35:52.251913   14072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:52.252734   14072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:52.254403   14072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:52.254982   14072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:52.256470   14072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:35:52.260161 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:35:52.260172 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:35:54.821803 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:35:54.831356 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:35:54.831415 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:35:54.855283 1193189 cri.go:89] found id: ""
	I1209 04:35:54.855298 1193189 logs.go:282] 0 containers: []
	W1209 04:35:54.855304 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:35:54.855309 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:35:54.855369 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:35:54.889160 1193189 cri.go:89] found id: ""
	I1209 04:35:54.889174 1193189 logs.go:282] 0 containers: []
	W1209 04:35:54.889181 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:35:54.889186 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:35:54.889245 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:35:54.912925 1193189 cri.go:89] found id: ""
	I1209 04:35:54.912939 1193189 logs.go:282] 0 containers: []
	W1209 04:35:54.912946 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:35:54.912951 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:35:54.913019 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:35:54.937856 1193189 cri.go:89] found id: ""
	I1209 04:35:54.937869 1193189 logs.go:282] 0 containers: []
	W1209 04:35:54.937876 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:35:54.937881 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:35:54.937939 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:35:54.961607 1193189 cri.go:89] found id: ""
	I1209 04:35:54.961620 1193189 logs.go:282] 0 containers: []
	W1209 04:35:54.961626 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:35:54.961632 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:35:54.961692 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:35:54.984614 1193189 cri.go:89] found id: ""
	I1209 04:35:54.984627 1193189 logs.go:282] 0 containers: []
	W1209 04:35:54.984634 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:35:54.984639 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:35:54.984702 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:35:55.019938 1193189 cri.go:89] found id: ""
	I1209 04:35:55.019952 1193189 logs.go:282] 0 containers: []
	W1209 04:35:55.019959 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:35:55.019967 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:35:55.019977 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:35:55.076703 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:35:55.076722 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:35:55.094781 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:35:55.094801 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:35:55.164076 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:35:55.155994   14165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:55.156899   14165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:55.158415   14165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:55.158819   14165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:55.160056   14165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:35:55.155994   14165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:55.156899   14165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:55.158415   14165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:55.158819   14165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:55.160056   14165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:35:55.164088 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:35:55.164098 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:35:55.225429 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:35:55.225451 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
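Every "describe nodes" attempt fails the same way: kubectl cannot reach the apiserver on localhost:8441 (connection refused), which is consistent with no kube-apiserver container ever being created. A quick way to confirm from the node whether anything is listening on that port, as a sketch (ss and curl are assumed to be available in the node image):

    # Is any process bound to the apiserver port (8441 in this run)?
    sudo ss -ltnp | grep 8441 || echo "nothing listening on 8441"
    # Probe the standard health endpoint; with no listener this fails fast with
    # 'connection refused', matching the kubectl errors above.
    curl -sk --max-time 5 https://localhost:8441/livez || echo "apiserver not responding"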
	I1209 04:35:57.756131 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:35:57.766096 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:35:57.766152 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:35:57.794059 1193189 cri.go:89] found id: ""
	I1209 04:35:57.794073 1193189 logs.go:282] 0 containers: []
	W1209 04:35:57.794080 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:35:57.794085 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:35:57.794142 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:35:57.817501 1193189 cri.go:89] found id: ""
	I1209 04:35:57.817514 1193189 logs.go:282] 0 containers: []
	W1209 04:35:57.817520 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:35:57.817526 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:35:57.817582 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:35:57.841800 1193189 cri.go:89] found id: ""
	I1209 04:35:57.841814 1193189 logs.go:282] 0 containers: []
	W1209 04:35:57.841821 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:35:57.841841 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:35:57.841905 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:35:57.865096 1193189 cri.go:89] found id: ""
	I1209 04:35:57.865109 1193189 logs.go:282] 0 containers: []
	W1209 04:35:57.865116 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:35:57.865122 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:35:57.865185 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:35:57.889214 1193189 cri.go:89] found id: ""
	I1209 04:35:57.889227 1193189 logs.go:282] 0 containers: []
	W1209 04:35:57.889234 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:35:57.889240 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:35:57.889299 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:35:57.913077 1193189 cri.go:89] found id: ""
	I1209 04:35:57.913090 1193189 logs.go:282] 0 containers: []
	W1209 04:35:57.913097 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:35:57.913102 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:35:57.913164 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:35:57.938101 1193189 cri.go:89] found id: ""
	I1209 04:35:57.938114 1193189 logs.go:282] 0 containers: []
	W1209 04:35:57.938121 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:35:57.938129 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:35:57.938139 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:35:57.968546 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:35:57.968563 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:35:58.025605 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:35:58.025626 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:35:58.042537 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:35:58.042554 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:35:58.112285 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:35:58.104144   14281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:58.104837   14281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:58.106385   14281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:58.106802   14281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:58.108456   14281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:35:58.104144   14281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:58.104837   14281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:58.106385   14281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:58.106802   14281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:58.108456   14281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:35:58.112295 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:35:58.112317 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:36:00.674623 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:36:00.684871 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:36:00.684932 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:36:00.723046 1193189 cri.go:89] found id: ""
	I1209 04:36:00.723060 1193189 logs.go:282] 0 containers: []
	W1209 04:36:00.723067 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:36:00.723082 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:36:00.723142 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:36:00.755063 1193189 cri.go:89] found id: ""
	I1209 04:36:00.755077 1193189 logs.go:282] 0 containers: []
	W1209 04:36:00.755094 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:36:00.755100 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:36:00.755170 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:36:00.780343 1193189 cri.go:89] found id: ""
	I1209 04:36:00.780357 1193189 logs.go:282] 0 containers: []
	W1209 04:36:00.780368 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:36:00.780373 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:36:00.780432 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:36:00.805177 1193189 cri.go:89] found id: ""
	I1209 04:36:00.805191 1193189 logs.go:282] 0 containers: []
	W1209 04:36:00.805198 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:36:00.805203 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:36:00.805261 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:36:00.829413 1193189 cri.go:89] found id: ""
	I1209 04:36:00.829426 1193189 logs.go:282] 0 containers: []
	W1209 04:36:00.829432 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:36:00.829439 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:36:00.829500 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:36:00.853086 1193189 cri.go:89] found id: ""
	I1209 04:36:00.853100 1193189 logs.go:282] 0 containers: []
	W1209 04:36:00.853107 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:36:00.853112 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:36:00.853185 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:36:00.881064 1193189 cri.go:89] found id: ""
	I1209 04:36:00.881078 1193189 logs.go:282] 0 containers: []
	W1209 04:36:00.881085 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:36:00.881093 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:36:00.881103 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:36:00.950102 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:36:00.942130   14365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:00.942767   14365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:00.944430   14365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:00.944779   14365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:00.946290   14365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:36:00.942130   14365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:00.942767   14365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:00.944430   14365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:00.944779   14365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:00.946290   14365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:36:00.950112 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:36:00.950123 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:36:01.012065 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:36:01.012086 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:36:01.041323 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:36:01.041339 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:36:01.099024 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:36:01.099044 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:36:03.616785 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:36:03.626636 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:36:03.626697 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:36:03.650973 1193189 cri.go:89] found id: ""
	I1209 04:36:03.650987 1193189 logs.go:282] 0 containers: []
	W1209 04:36:03.650994 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:36:03.650999 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:36:03.651060 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:36:03.674678 1193189 cri.go:89] found id: ""
	I1209 04:36:03.674692 1193189 logs.go:282] 0 containers: []
	W1209 04:36:03.674699 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:36:03.674705 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:36:03.674777 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:36:03.705193 1193189 cri.go:89] found id: ""
	I1209 04:36:03.705206 1193189 logs.go:282] 0 containers: []
	W1209 04:36:03.705213 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:36:03.705218 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:36:03.705281 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:36:03.733013 1193189 cri.go:89] found id: ""
	I1209 04:36:03.733026 1193189 logs.go:282] 0 containers: []
	W1209 04:36:03.733033 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:36:03.733038 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:36:03.733096 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:36:03.770375 1193189 cri.go:89] found id: ""
	I1209 04:36:03.770389 1193189 logs.go:282] 0 containers: []
	W1209 04:36:03.770396 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:36:03.770401 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:36:03.770457 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:36:03.793967 1193189 cri.go:89] found id: ""
	I1209 04:36:03.793980 1193189 logs.go:282] 0 containers: []
	W1209 04:36:03.793987 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:36:03.793992 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:36:03.794053 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:36:03.818652 1193189 cri.go:89] found id: ""
	I1209 04:36:03.818666 1193189 logs.go:282] 0 containers: []
	W1209 04:36:03.818672 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:36:03.818681 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:36:03.818691 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:36:03.873671 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:36:03.873692 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:36:03.890142 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:36:03.890159 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:36:03.958206 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:36:03.950384   14475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:03.950766   14475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:03.952402   14475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:03.952806   14475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:03.954365   14475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:36:03.950384   14475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:03.950766   14475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:03.952402   14475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:03.952806   14475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:03.954365   14475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:36:03.958216 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:36:03.958227 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:36:04.019401 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:36:04.019421 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:36:06.551878 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:36:06.561600 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:36:06.561657 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:36:06.585277 1193189 cri.go:89] found id: ""
	I1209 04:36:06.585291 1193189 logs.go:282] 0 containers: []
	W1209 04:36:06.585298 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:36:06.585304 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:36:06.585366 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:36:06.613401 1193189 cri.go:89] found id: ""
	I1209 04:36:06.613415 1193189 logs.go:282] 0 containers: []
	W1209 04:36:06.613421 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:36:06.613426 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:36:06.613483 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:36:06.642329 1193189 cri.go:89] found id: ""
	I1209 04:36:06.642342 1193189 logs.go:282] 0 containers: []
	W1209 04:36:06.642349 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:36:06.642354 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:36:06.642413 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:36:06.666445 1193189 cri.go:89] found id: ""
	I1209 04:36:06.666458 1193189 logs.go:282] 0 containers: []
	W1209 04:36:06.666465 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:36:06.666470 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:36:06.666527 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:36:06.695405 1193189 cri.go:89] found id: ""
	I1209 04:36:06.695419 1193189 logs.go:282] 0 containers: []
	W1209 04:36:06.695425 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:36:06.695431 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:36:06.695488 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:36:06.734331 1193189 cri.go:89] found id: ""
	I1209 04:36:06.734345 1193189 logs.go:282] 0 containers: []
	W1209 04:36:06.734361 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:36:06.734372 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:36:06.734441 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:36:06.766210 1193189 cri.go:89] found id: ""
	I1209 04:36:06.766223 1193189 logs.go:282] 0 containers: []
	W1209 04:36:06.766231 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:36:06.766238 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:36:06.766248 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:36:06.822607 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:36:06.822627 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:36:06.839326 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:36:06.839342 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:36:06.900387 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:36:06.892243   14581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:06.892630   14581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:06.894401   14581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:06.894869   14581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:06.896343   14581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:36:06.892243   14581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:06.892630   14581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:06.894401   14581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:06.894869   14581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:06.896343   14581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:36:06.900405 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:36:06.900421 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:36:06.961047 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:36:06.961067 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:36:09.488140 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:36:09.498332 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:36:09.498409 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:36:09.523347 1193189 cri.go:89] found id: ""
	I1209 04:36:09.523373 1193189 logs.go:282] 0 containers: []
	W1209 04:36:09.523380 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:36:09.523387 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:36:09.523459 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:36:09.550096 1193189 cri.go:89] found id: ""
	I1209 04:36:09.550111 1193189 logs.go:282] 0 containers: []
	W1209 04:36:09.550117 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:36:09.550123 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:36:09.550185 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:36:09.578695 1193189 cri.go:89] found id: ""
	I1209 04:36:09.578709 1193189 logs.go:282] 0 containers: []
	W1209 04:36:09.578715 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:36:09.578720 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:36:09.578784 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:36:09.607079 1193189 cri.go:89] found id: ""
	I1209 04:36:09.607093 1193189 logs.go:282] 0 containers: []
	W1209 04:36:09.607100 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:36:09.607105 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:36:09.607166 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:36:09.635495 1193189 cri.go:89] found id: ""
	I1209 04:36:09.635510 1193189 logs.go:282] 0 containers: []
	W1209 04:36:09.635516 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:36:09.635521 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:36:09.635584 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:36:09.661747 1193189 cri.go:89] found id: ""
	I1209 04:36:09.661761 1193189 logs.go:282] 0 containers: []
	W1209 04:36:09.661767 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:36:09.661773 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:36:09.661831 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:36:09.694535 1193189 cri.go:89] found id: ""
	I1209 04:36:09.694549 1193189 logs.go:282] 0 containers: []
	W1209 04:36:09.694556 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:36:09.694564 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:36:09.694574 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:36:09.759636 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:36:09.759656 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:36:09.777485 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:36:09.777502 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:36:09.841963 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:36:09.834188   14687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:09.834610   14687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:09.836196   14687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:09.836779   14687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:09.838239   14687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:36:09.834188   14687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:09.834610   14687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:09.836196   14687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:09.836779   14687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:09.838239   14687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:36:09.841974 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:36:09.841984 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:36:09.904615 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:36:09.904636 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:36:12.433539 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:36:12.443370 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:36:12.443435 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:36:12.469616 1193189 cri.go:89] found id: ""
	I1209 04:36:12.469630 1193189 logs.go:282] 0 containers: []
	W1209 04:36:12.469637 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:36:12.469643 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:36:12.469704 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:36:12.493917 1193189 cri.go:89] found id: ""
	I1209 04:36:12.493930 1193189 logs.go:282] 0 containers: []
	W1209 04:36:12.493937 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:36:12.493942 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:36:12.494001 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:36:12.518803 1193189 cri.go:89] found id: ""
	I1209 04:36:12.518817 1193189 logs.go:282] 0 containers: []
	W1209 04:36:12.518842 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:36:12.518848 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:36:12.518917 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:36:12.542764 1193189 cri.go:89] found id: ""
	I1209 04:36:12.542785 1193189 logs.go:282] 0 containers: []
	W1209 04:36:12.542792 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:36:12.542797 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:36:12.542859 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:36:12.566738 1193189 cri.go:89] found id: ""
	I1209 04:36:12.566751 1193189 logs.go:282] 0 containers: []
	W1209 04:36:12.566758 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:36:12.566762 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:36:12.566830 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:36:12.594757 1193189 cri.go:89] found id: ""
	I1209 04:36:12.594772 1193189 logs.go:282] 0 containers: []
	W1209 04:36:12.594778 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:36:12.594784 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:36:12.594850 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:36:12.619407 1193189 cri.go:89] found id: ""
	I1209 04:36:12.619421 1193189 logs.go:282] 0 containers: []
	W1209 04:36:12.619427 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:36:12.619434 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:36:12.619445 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:36:12.692974 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:36:12.683791   14780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:12.684626   14780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:12.686439   14780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:12.687100   14780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:12.688999   14780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:36:12.692984 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:36:12.693001 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:36:12.766313 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:36:12.766340 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:36:12.793057 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:36:12.793075 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:36:12.849665 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:36:12.849689 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
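The cycle above, which repeats below every few seconds, is minikube's apiserver health loop: probe for each expected control-plane container, find none, then re-gather logs. As a minimal sketch, the same probe can be run by hand given shell access to the node (e.g. via minikube ssh); every command is taken verbatim from the log above, and only the loop wrapper is added:

	# Probe each expected control-plane container; empty output corresponds to
	# the `found id: ""` / `0 containers` lines in the log.
	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	  echo "== $c =="
	  sudo crictl ps -a --quiet --name="$c"
	done
	# The log sources gathered on each pass:
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u containerd -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400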
	I1209 04:36:15.366796 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:36:15.376649 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:36:15.376719 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:36:15.400344 1193189 cri.go:89] found id: ""
	I1209 04:36:15.400358 1193189 logs.go:282] 0 containers: []
	W1209 04:36:15.400372 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:36:15.400378 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:36:15.400437 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:36:15.425809 1193189 cri.go:89] found id: ""
	I1209 04:36:15.425822 1193189 logs.go:282] 0 containers: []
	W1209 04:36:15.425829 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:36:15.425834 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:36:15.425894 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:36:15.450444 1193189 cri.go:89] found id: ""
	I1209 04:36:15.450458 1193189 logs.go:282] 0 containers: []
	W1209 04:36:15.450466 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:36:15.450471 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:36:15.450531 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:36:15.478163 1193189 cri.go:89] found id: ""
	I1209 04:36:15.478178 1193189 logs.go:282] 0 containers: []
	W1209 04:36:15.478185 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:36:15.478190 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:36:15.478261 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:36:15.502360 1193189 cri.go:89] found id: ""
	I1209 04:36:15.502374 1193189 logs.go:282] 0 containers: []
	W1209 04:36:15.502381 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:36:15.502386 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:36:15.502450 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:36:15.530599 1193189 cri.go:89] found id: ""
	I1209 04:36:15.530614 1193189 logs.go:282] 0 containers: []
	W1209 04:36:15.530620 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:36:15.530626 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:36:15.530693 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:36:15.554654 1193189 cri.go:89] found id: ""
	I1209 04:36:15.554668 1193189 logs.go:282] 0 containers: []
	W1209 04:36:15.554675 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:36:15.554683 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:36:15.554693 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:36:15.614962 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:36:15.614982 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:36:15.641417 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:36:15.641433 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:36:15.696674 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:36:15.696692 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:36:15.714032 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:36:15.714047 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:36:15.786226 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:36:15.778061   14907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:15.778499   14907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:15.780149   14907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:15.780759   14907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:15.782381   14907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:36:18.286483 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:36:18.296288 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:36:18.296346 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:36:18.323616 1193189 cri.go:89] found id: ""
	I1209 04:36:18.323629 1193189 logs.go:282] 0 containers: []
	W1209 04:36:18.323636 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:36:18.323642 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:36:18.323706 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:36:18.348203 1193189 cri.go:89] found id: ""
	I1209 04:36:18.348218 1193189 logs.go:282] 0 containers: []
	W1209 04:36:18.348225 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:36:18.348231 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:36:18.348290 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:36:18.372639 1193189 cri.go:89] found id: ""
	I1209 04:36:18.372653 1193189 logs.go:282] 0 containers: []
	W1209 04:36:18.372660 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:36:18.372671 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:36:18.372732 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:36:18.400006 1193189 cri.go:89] found id: ""
	I1209 04:36:18.400037 1193189 logs.go:282] 0 containers: []
	W1209 04:36:18.400044 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:36:18.400049 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:36:18.400120 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:36:18.424038 1193189 cri.go:89] found id: ""
	I1209 04:36:18.424053 1193189 logs.go:282] 0 containers: []
	W1209 04:36:18.424060 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:36:18.424068 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:36:18.424135 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:36:18.447692 1193189 cri.go:89] found id: ""
	I1209 04:36:18.447719 1193189 logs.go:282] 0 containers: []
	W1209 04:36:18.447726 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:36:18.447737 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:36:18.447809 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:36:18.473888 1193189 cri.go:89] found id: ""
	I1209 04:36:18.473902 1193189 logs.go:282] 0 containers: []
	W1209 04:36:18.473908 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:36:18.473916 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:36:18.473925 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:36:18.531920 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:36:18.531945 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:36:18.549523 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:36:18.549540 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:36:18.610296 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:36:18.601988   14993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:18.602374   14993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:18.603904   14993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:18.604520   14993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:18.606270   14993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:36:18.610306 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:36:18.610316 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:36:18.673185 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:36:18.673204 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:36:21.215945 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:36:21.225779 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:36:21.225842 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:36:21.251614 1193189 cri.go:89] found id: ""
	I1209 04:36:21.251627 1193189 logs.go:282] 0 containers: []
	W1209 04:36:21.251633 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:36:21.251639 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:36:21.251701 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:36:21.274988 1193189 cri.go:89] found id: ""
	I1209 04:36:21.275002 1193189 logs.go:282] 0 containers: []
	W1209 04:36:21.275009 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:36:21.275016 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:36:21.275073 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:36:21.298100 1193189 cri.go:89] found id: ""
	I1209 04:36:21.298113 1193189 logs.go:282] 0 containers: []
	W1209 04:36:21.298120 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:36:21.298125 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:36:21.298188 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:36:21.323043 1193189 cri.go:89] found id: ""
	I1209 04:36:21.323057 1193189 logs.go:282] 0 containers: []
	W1209 04:36:21.323063 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:36:21.323068 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:36:21.323128 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:36:21.346629 1193189 cri.go:89] found id: ""
	I1209 04:36:21.346642 1193189 logs.go:282] 0 containers: []
	W1209 04:36:21.346649 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:36:21.346654 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:36:21.346713 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:36:21.370687 1193189 cri.go:89] found id: ""
	I1209 04:36:21.370700 1193189 logs.go:282] 0 containers: []
	W1209 04:36:21.370707 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:36:21.370712 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:36:21.370767 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:36:21.394774 1193189 cri.go:89] found id: ""
	I1209 04:36:21.394788 1193189 logs.go:282] 0 containers: []
	W1209 04:36:21.394794 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:36:21.394803 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:36:21.394813 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:36:21.458240 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:36:21.449927   15094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:21.450664   15094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:21.452537   15094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:21.452900   15094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:21.454442   15094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:36:21.458249 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:36:21.458260 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:36:21.519830 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:36:21.519850 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:36:21.556076 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:36:21.556093 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:36:21.614749 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:36:21.614769 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:36:24.132222 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:36:24.143277 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:36:24.143352 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:36:24.173051 1193189 cri.go:89] found id: ""
	I1209 04:36:24.173065 1193189 logs.go:282] 0 containers: []
	W1209 04:36:24.173072 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:36:24.173077 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:36:24.173134 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:36:24.198407 1193189 cri.go:89] found id: ""
	I1209 04:36:24.198421 1193189 logs.go:282] 0 containers: []
	W1209 04:36:24.198428 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:36:24.198432 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:36:24.198490 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:36:24.224986 1193189 cri.go:89] found id: ""
	I1209 04:36:24.225000 1193189 logs.go:282] 0 containers: []
	W1209 04:36:24.225007 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:36:24.225012 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:36:24.225071 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:36:24.249942 1193189 cri.go:89] found id: ""
	I1209 04:36:24.249957 1193189 logs.go:282] 0 containers: []
	W1209 04:36:24.249964 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:36:24.249969 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:36:24.250031 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:36:24.274252 1193189 cri.go:89] found id: ""
	I1209 04:36:24.274266 1193189 logs.go:282] 0 containers: []
	W1209 04:36:24.274273 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:36:24.274278 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:36:24.274347 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:36:24.302468 1193189 cri.go:89] found id: ""
	I1209 04:36:24.302485 1193189 logs.go:282] 0 containers: []
	W1209 04:36:24.302491 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:36:24.302497 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:36:24.302582 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:36:24.328883 1193189 cri.go:89] found id: ""
	I1209 04:36:24.328898 1193189 logs.go:282] 0 containers: []
	W1209 04:36:24.328905 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:36:24.328913 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:36:24.328923 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:36:24.386082 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:36:24.386102 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:36:24.403782 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:36:24.403798 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:36:24.473588 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:36:24.462744   15202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:24.463330   15202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:24.466259   15202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:24.467650   15202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:24.468411   15202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:36:24.473598 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:36:24.473609 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:36:24.534819 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:36:24.534841 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:36:27.064221 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:36:27.074260 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:36:27.074334 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:36:27.098417 1193189 cri.go:89] found id: ""
	I1209 04:36:27.098445 1193189 logs.go:282] 0 containers: []
	W1209 04:36:27.098452 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:36:27.098457 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:36:27.098527 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:36:27.126158 1193189 cri.go:89] found id: ""
	I1209 04:36:27.126172 1193189 logs.go:282] 0 containers: []
	W1209 04:36:27.126184 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:36:27.126189 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:36:27.126250 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:36:27.154258 1193189 cri.go:89] found id: ""
	I1209 04:36:27.154271 1193189 logs.go:282] 0 containers: []
	W1209 04:36:27.154278 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:36:27.154284 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:36:27.154343 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:36:27.179273 1193189 cri.go:89] found id: ""
	I1209 04:36:27.179286 1193189 logs.go:282] 0 containers: []
	W1209 04:36:27.179293 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:36:27.179309 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:36:27.179367 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:36:27.204706 1193189 cri.go:89] found id: ""
	I1209 04:36:27.204720 1193189 logs.go:282] 0 containers: []
	W1209 04:36:27.204727 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:36:27.204732 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:36:27.204791 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:36:27.230005 1193189 cri.go:89] found id: ""
	I1209 04:36:27.230019 1193189 logs.go:282] 0 containers: []
	W1209 04:36:27.230026 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:36:27.230032 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:36:27.230098 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:36:27.254482 1193189 cri.go:89] found id: ""
	I1209 04:36:27.254496 1193189 logs.go:282] 0 containers: []
	W1209 04:36:27.254512 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:36:27.254521 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:36:27.254531 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:36:27.310002 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:36:27.310022 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:36:27.327694 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:36:27.327713 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:36:27.395258 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:36:27.386987   15311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:27.387759   15311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:27.389469   15311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:27.389968   15311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:27.391467   15311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:36:27.395269 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:36:27.395279 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:36:27.457675 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:36:27.457694 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:36:29.986185 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:36:30.005634 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:36:30.005711 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:36:30.038694 1193189 cri.go:89] found id: ""
	I1209 04:36:30.038709 1193189 logs.go:282] 0 containers: []
	W1209 04:36:30.038717 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:36:30.038723 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:36:30.038792 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:36:30.065088 1193189 cri.go:89] found id: ""
	I1209 04:36:30.065110 1193189 logs.go:282] 0 containers: []
	W1209 04:36:30.065119 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:36:30.065124 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:36:30.065188 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:36:30.090159 1193189 cri.go:89] found id: ""
	I1209 04:36:30.090173 1193189 logs.go:282] 0 containers: []
	W1209 04:36:30.090180 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:36:30.090185 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:36:30.090250 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:36:30.118708 1193189 cri.go:89] found id: ""
	I1209 04:36:30.118721 1193189 logs.go:282] 0 containers: []
	W1209 04:36:30.118728 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:36:30.118734 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:36:30.118796 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:36:30.146404 1193189 cri.go:89] found id: ""
	I1209 04:36:30.146417 1193189 logs.go:282] 0 containers: []
	W1209 04:36:30.146424 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:36:30.146429 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:36:30.146488 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:36:30.170089 1193189 cri.go:89] found id: ""
	I1209 04:36:30.170102 1193189 logs.go:282] 0 containers: []
	W1209 04:36:30.170109 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:36:30.170114 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:36:30.170171 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:36:30.194303 1193189 cri.go:89] found id: ""
	I1209 04:36:30.194317 1193189 logs.go:282] 0 containers: []
	W1209 04:36:30.194327 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:36:30.194334 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:36:30.194344 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:36:30.230597 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:36:30.230613 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:36:30.285894 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:36:30.285913 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:36:30.303774 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:36:30.303789 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:36:30.370275 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:36:30.361691   15431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:30.362453   15431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:30.364059   15431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:30.364598   15431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:30.366280   15431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:36:30.370284 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:36:30.370297 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:36:32.932454 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:36:32.942712 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:36:32.942772 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:36:32.970393 1193189 cri.go:89] found id: ""
	I1209 04:36:32.970406 1193189 logs.go:282] 0 containers: []
	W1209 04:36:32.970413 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:36:32.970418 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:36:32.970480 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:36:33.001462 1193189 cri.go:89] found id: ""
	I1209 04:36:33.001476 1193189 logs.go:282] 0 containers: []
	W1209 04:36:33.001489 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:36:33.001495 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:36:33.001561 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:36:33.027773 1193189 cri.go:89] found id: ""
	I1209 04:36:33.027787 1193189 logs.go:282] 0 containers: []
	W1209 04:36:33.027794 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:36:33.027799 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:36:33.027858 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:36:33.054066 1193189 cri.go:89] found id: ""
	I1209 04:36:33.054080 1193189 logs.go:282] 0 containers: []
	W1209 04:36:33.054086 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:36:33.054091 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:36:33.054152 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:36:33.077043 1193189 cri.go:89] found id: ""
	I1209 04:36:33.077057 1193189 logs.go:282] 0 containers: []
	W1209 04:36:33.077064 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:36:33.077069 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:36:33.077127 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:36:33.101043 1193189 cri.go:89] found id: ""
	I1209 04:36:33.101056 1193189 logs.go:282] 0 containers: []
	W1209 04:36:33.101063 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:36:33.101068 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:36:33.101126 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:36:33.125074 1193189 cri.go:89] found id: ""
	I1209 04:36:33.125088 1193189 logs.go:282] 0 containers: []
	W1209 04:36:33.125096 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:36:33.125104 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:36:33.125115 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:36:33.181829 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:36:33.181849 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:36:33.198599 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:36:33.198616 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:36:33.259348 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:36:33.250653   15527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:33.251506   15527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:33.253061   15527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:33.253668   15527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:33.255199   15527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:36:33.259358 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:36:33.259369 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:36:33.321638 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:36:33.321660 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:36:35.847785 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:36:35.857973 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:36:35.858039 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:36:35.882819 1193189 cri.go:89] found id: ""
	I1209 04:36:35.882832 1193189 logs.go:282] 0 containers: []
	W1209 04:36:35.882839 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:36:35.882844 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:36:35.882908 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:36:35.911762 1193189 cri.go:89] found id: ""
	I1209 04:36:35.911776 1193189 logs.go:282] 0 containers: []
	W1209 04:36:35.911784 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:36:35.911789 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:36:35.911849 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:36:35.946631 1193189 cri.go:89] found id: ""
	I1209 04:36:35.946646 1193189 logs.go:282] 0 containers: []
	W1209 04:36:35.946652 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:36:35.946663 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:36:35.946721 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:36:35.972345 1193189 cri.go:89] found id: ""
	I1209 04:36:35.972360 1193189 logs.go:282] 0 containers: []
	W1209 04:36:35.972367 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:36:35.972372 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:36:35.972438 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:36:36.010844 1193189 cri.go:89] found id: ""
	I1209 04:36:36.010859 1193189 logs.go:282] 0 containers: []
	W1209 04:36:36.010867 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:36:36.010876 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:36:36.010940 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:36:36.036297 1193189 cri.go:89] found id: ""
	I1209 04:36:36.036310 1193189 logs.go:282] 0 containers: []
	W1209 04:36:36.036317 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:36:36.036323 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:36:36.036387 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:36:36.066383 1193189 cri.go:89] found id: ""
	I1209 04:36:36.066398 1193189 logs.go:282] 0 containers: []
	W1209 04:36:36.066404 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:36:36.066412 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:36:36.066422 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:36:36.123320 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:36:36.123340 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:36:36.141674 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:36:36.141691 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:36:36.207738 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:36:36.198534   15631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:36.199238   15631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:36.201129   15631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:36.201829   15631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:36.203559   15631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:36:36.207749 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:36:36.207760 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:36:36.271530 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:36:36.271553 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
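	[editor's note] The block above is one iteration of minikube's wait-for-apiserver loop, and it repeats for the rest of this test: pgrep for a kube-apiserver process, query crictl for each expected control-plane container, then gather kubelet/dmesg/containerd logs when nothing is found. The sketch below is a minimal reconstruction of that loop inferred from these log lines, not minikube's actual code; runCmd is a hypothetical stand-in for the ssh_runner.go calls shown above, and the ~3-second cadence is read off the timestamps.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// runCmd is a hypothetical local stand-in for the remote
	// ssh_runner.go invocations visible in the log.
	func runCmd(args ...string) (string, error) {
		out, err := exec.Command(args[0], args[1:]...).Output()
		return strings.TrimSpace(string(out)), err
	}

	func main() {
		// The same component list the log cycles through.
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet",
		}
		for {
			// Step 1: is a kube-apiserver process running yet?
			// pgrep exits non-zero when there is no match.
			if _, err := runCmd("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*"); err == nil {
				fmt.Println("kube-apiserver process found")
				return
			}
			// Step 2: ask the CRI about each expected container,
			// mirroring "sudo crictl ps -a --quiet --name=<name>".
			for _, name := range components {
				id, _ := runCmd("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name)
				if id == "" {
					fmt.Printf("no container found matching %q\n", name)
				}
			}
			// Step 3: gather kubelet/dmesg/containerd logs (elided here),
			// then retry on roughly the cadence seen in the timestamps.
			time.Sleep(3 * time.Second)
		}
	}
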
	I1209 04:36:38.808031 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:36:38.818384 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:36:38.818445 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:36:38.842672 1193189 cri.go:89] found id: ""
	I1209 04:36:38.842686 1193189 logs.go:282] 0 containers: []
	W1209 04:36:38.842692 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:36:38.842697 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:36:38.842757 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:36:38.867351 1193189 cri.go:89] found id: ""
	I1209 04:36:38.867365 1193189 logs.go:282] 0 containers: []
	W1209 04:36:38.867371 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:36:38.867376 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:36:38.867436 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:36:38.891443 1193189 cri.go:89] found id: ""
	I1209 04:36:38.891456 1193189 logs.go:282] 0 containers: []
	W1209 04:36:38.891463 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:36:38.891469 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:36:38.891530 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:36:38.916345 1193189 cri.go:89] found id: ""
	I1209 04:36:38.916359 1193189 logs.go:282] 0 containers: []
	W1209 04:36:38.916366 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:36:38.916371 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:36:38.916435 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:36:38.949316 1193189 cri.go:89] found id: ""
	I1209 04:36:38.949330 1193189 logs.go:282] 0 containers: []
	W1209 04:36:38.949348 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:36:38.949354 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:36:38.949427 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:36:38.983440 1193189 cri.go:89] found id: ""
	I1209 04:36:38.983453 1193189 logs.go:282] 0 containers: []
	W1209 04:36:38.983472 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:36:38.983479 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:36:38.983548 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:36:39.016431 1193189 cri.go:89] found id: ""
	I1209 04:36:39.016445 1193189 logs.go:282] 0 containers: []
	W1209 04:36:39.016452 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:36:39.016460 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:36:39.016470 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:36:39.072919 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:36:39.072940 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:36:39.091632 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:36:39.091649 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:36:39.155205 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:36:39.147101   15734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:39.147530   15734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:39.149195   15734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:39.149594   15734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:39.151291   15734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:36:39.155215 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:36:39.155237 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:36:39.217334 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:36:39.217354 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:36:41.745095 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:36:41.755765 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:36:41.755830 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:36:41.788789 1193189 cri.go:89] found id: ""
	I1209 04:36:41.788815 1193189 logs.go:282] 0 containers: []
	W1209 04:36:41.788821 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:36:41.788827 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:36:41.788905 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:36:41.818341 1193189 cri.go:89] found id: ""
	I1209 04:36:41.818363 1193189 logs.go:282] 0 containers: []
	W1209 04:36:41.818371 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:36:41.818376 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:36:41.818443 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:36:41.847734 1193189 cri.go:89] found id: ""
	I1209 04:36:41.847748 1193189 logs.go:282] 0 containers: []
	W1209 04:36:41.847754 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:36:41.847768 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:36:41.847827 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:36:41.871920 1193189 cri.go:89] found id: ""
	I1209 04:36:41.871943 1193189 logs.go:282] 0 containers: []
	W1209 04:36:41.871950 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:36:41.871955 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:36:41.872035 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:36:41.897849 1193189 cri.go:89] found id: ""
	I1209 04:36:41.897863 1193189 logs.go:282] 0 containers: []
	W1209 04:36:41.897870 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:36:41.897875 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:36:41.897936 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:36:41.923060 1193189 cri.go:89] found id: ""
	I1209 04:36:41.923083 1193189 logs.go:282] 0 containers: []
	W1209 04:36:41.923090 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:36:41.923096 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:36:41.923163 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:36:41.952660 1193189 cri.go:89] found id: ""
	I1209 04:36:41.952684 1193189 logs.go:282] 0 containers: []
	W1209 04:36:41.952692 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:36:41.952699 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:36:41.952709 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:36:42.023725 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:36:42.023763 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:36:42.042594 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:36:42.042613 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:36:42.123707 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:36:42.110165   15835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:42.110742   15835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:42.112376   15835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:42.113625   15835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:42.114588   15835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:36:42.123742 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:36:42.123763 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:36:42.205354 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:36:42.205378 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:36:44.742230 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:36:44.752061 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:36:44.752130 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:36:44.777546 1193189 cri.go:89] found id: ""
	I1209 04:36:44.777560 1193189 logs.go:282] 0 containers: []
	W1209 04:36:44.777567 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:36:44.777573 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:36:44.777640 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:36:44.800656 1193189 cri.go:89] found id: ""
	I1209 04:36:44.800670 1193189 logs.go:282] 0 containers: []
	W1209 04:36:44.800677 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:36:44.800681 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:36:44.800746 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:36:44.823629 1193189 cri.go:89] found id: ""
	I1209 04:36:44.823643 1193189 logs.go:282] 0 containers: []
	W1209 04:36:44.823649 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:36:44.823654 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:36:44.823710 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:36:44.847779 1193189 cri.go:89] found id: ""
	I1209 04:36:44.847792 1193189 logs.go:282] 0 containers: []
	W1209 04:36:44.847799 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:36:44.847804 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:36:44.847864 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:36:44.871420 1193189 cri.go:89] found id: ""
	I1209 04:36:44.871434 1193189 logs.go:282] 0 containers: []
	W1209 04:36:44.871441 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:36:44.871446 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:36:44.871502 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:36:44.897429 1193189 cri.go:89] found id: ""
	I1209 04:36:44.897443 1193189 logs.go:282] 0 containers: []
	W1209 04:36:44.897450 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:36:44.897455 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:36:44.897515 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:36:44.921002 1193189 cri.go:89] found id: ""
	I1209 04:36:44.921016 1193189 logs.go:282] 0 containers: []
	W1209 04:36:44.921023 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:36:44.921030 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:36:44.921050 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:36:44.943906 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:36:44.943923 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:36:45.040267 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:36:45.023556   15936 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:45.024395   15936 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:45.026904   15936 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:45.028346   15936 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:45.029304   15936 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:36:45.040278 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:36:45.040290 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:36:45.111615 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:36:45.111641 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:36:45.154764 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:36:45.154783 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:36:47.737899 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:36:47.748114 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:36:47.748183 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:36:47.772307 1193189 cri.go:89] found id: ""
	I1209 04:36:47.772321 1193189 logs.go:282] 0 containers: []
	W1209 04:36:47.772327 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:36:47.772333 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:36:47.772392 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:36:47.796250 1193189 cri.go:89] found id: ""
	I1209 04:36:47.796264 1193189 logs.go:282] 0 containers: []
	W1209 04:36:47.796271 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:36:47.796276 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:36:47.796337 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:36:47.820196 1193189 cri.go:89] found id: ""
	I1209 04:36:47.820209 1193189 logs.go:282] 0 containers: []
	W1209 04:36:47.820217 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:36:47.820222 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:36:47.820279 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:36:47.844179 1193189 cri.go:89] found id: ""
	I1209 04:36:47.844193 1193189 logs.go:282] 0 containers: []
	W1209 04:36:47.844200 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:36:47.844205 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:36:47.844261 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:36:47.871664 1193189 cri.go:89] found id: ""
	I1209 04:36:47.871678 1193189 logs.go:282] 0 containers: []
	W1209 04:36:47.871685 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:36:47.871689 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:36:47.871746 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:36:47.897882 1193189 cri.go:89] found id: ""
	I1209 04:36:47.897896 1193189 logs.go:282] 0 containers: []
	W1209 04:36:47.897902 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:36:47.897907 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:36:47.897968 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:36:47.925663 1193189 cri.go:89] found id: ""
	I1209 04:36:47.925678 1193189 logs.go:282] 0 containers: []
	W1209 04:36:47.925684 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:36:47.925692 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:36:47.925702 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:36:47.982430 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:36:47.982448 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:36:48.003029 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:36:48.003046 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:36:48.081084 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:36:48.072200   16045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:48.073024   16045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:48.074652   16045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:48.075018   16045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:48.076580   16045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:36:48.081095 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:36:48.081114 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:36:48.144865 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:36:48.144883 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:36:50.676655 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:36:50.687887 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:36:50.687948 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:36:50.712477 1193189 cri.go:89] found id: ""
	I1209 04:36:50.712492 1193189 logs.go:282] 0 containers: []
	W1209 04:36:50.712498 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:36:50.712504 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:36:50.712560 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:36:50.743459 1193189 cri.go:89] found id: ""
	I1209 04:36:50.743472 1193189 logs.go:282] 0 containers: []
	W1209 04:36:50.743479 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:36:50.743484 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:36:50.743559 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:36:50.769066 1193189 cri.go:89] found id: ""
	I1209 04:36:50.769080 1193189 logs.go:282] 0 containers: []
	W1209 04:36:50.769087 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:36:50.769093 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:36:50.769149 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:36:50.792910 1193189 cri.go:89] found id: ""
	I1209 04:36:50.792924 1193189 logs.go:282] 0 containers: []
	W1209 04:36:50.792931 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:36:50.792942 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:36:50.793002 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:36:50.817006 1193189 cri.go:89] found id: ""
	I1209 04:36:50.817020 1193189 logs.go:282] 0 containers: []
	W1209 04:36:50.817027 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:36:50.817033 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:36:50.817108 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:36:50.840981 1193189 cri.go:89] found id: ""
	I1209 04:36:50.840995 1193189 logs.go:282] 0 containers: []
	W1209 04:36:50.841002 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:36:50.841007 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:36:50.841065 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:36:50.864484 1193189 cri.go:89] found id: ""
	I1209 04:36:50.864498 1193189 logs.go:282] 0 containers: []
	W1209 04:36:50.864504 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:36:50.864512 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:36:50.864522 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:36:50.934409 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:36:50.923680   16138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:50.924264   16138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:50.925919   16138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:50.926350   16138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:50.927812   16138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:36:50.934428 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:36:50.934439 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:36:51.007145 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:36:51.007168 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:36:51.035885 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:36:51.035901 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:36:51.094880 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:36:51.094903 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:36:53.613358 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:36:53.623300 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:36:53.623360 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:36:53.649605 1193189 cri.go:89] found id: ""
	I1209 04:36:53.649619 1193189 logs.go:282] 0 containers: []
	W1209 04:36:53.649625 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:36:53.649630 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:36:53.649688 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:36:53.673756 1193189 cri.go:89] found id: ""
	I1209 04:36:53.673771 1193189 logs.go:282] 0 containers: []
	W1209 04:36:53.673777 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:36:53.673782 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:36:53.673841 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:36:53.697312 1193189 cri.go:89] found id: ""
	I1209 04:36:53.697326 1193189 logs.go:282] 0 containers: []
	W1209 04:36:53.697333 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:36:53.697339 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:36:53.697405 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:36:53.721559 1193189 cri.go:89] found id: ""
	I1209 04:36:53.721573 1193189 logs.go:282] 0 containers: []
	W1209 04:36:53.721580 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:36:53.721585 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:36:53.721643 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:36:53.745640 1193189 cri.go:89] found id: ""
	I1209 04:36:53.745654 1193189 logs.go:282] 0 containers: []
	W1209 04:36:53.745661 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:36:53.745666 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:36:53.745724 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:36:53.770072 1193189 cri.go:89] found id: ""
	I1209 04:36:53.770086 1193189 logs.go:282] 0 containers: []
	W1209 04:36:53.770093 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:36:53.770099 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:36:53.770161 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:36:53.793834 1193189 cri.go:89] found id: ""
	I1209 04:36:53.793848 1193189 logs.go:282] 0 containers: []
	W1209 04:36:53.793856 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:36:53.793864 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:36:53.793873 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:36:53.853273 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:36:53.853293 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:36:53.870522 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:36:53.870539 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:36:53.937367 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:36:53.928497   16249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:53.929009   16249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:53.930701   16249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:53.931304   16249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:53.932870   16249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:36:53.937377 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:36:53.937387 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:36:54.005219 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:36:54.005240 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:36:56.538809 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:36:56.548679 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:36:56.548738 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:36:56.572505 1193189 cri.go:89] found id: ""
	I1209 04:36:56.572519 1193189 logs.go:282] 0 containers: []
	W1209 04:36:56.572526 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:36:56.572531 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:36:56.572591 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:36:56.596732 1193189 cri.go:89] found id: ""
	I1209 04:36:56.596746 1193189 logs.go:282] 0 containers: []
	W1209 04:36:56.596753 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:36:56.596758 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:36:56.596817 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:36:56.622042 1193189 cri.go:89] found id: ""
	I1209 04:36:56.622056 1193189 logs.go:282] 0 containers: []
	W1209 04:36:56.622063 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:36:56.622068 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:36:56.622125 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:36:56.644865 1193189 cri.go:89] found id: ""
	I1209 04:36:56.644879 1193189 logs.go:282] 0 containers: []
	W1209 04:36:56.644885 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:36:56.644890 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:36:56.644947 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:36:56.670230 1193189 cri.go:89] found id: ""
	I1209 04:36:56.670244 1193189 logs.go:282] 0 containers: []
	W1209 04:36:56.670252 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:36:56.670257 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:36:56.670314 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:36:56.697566 1193189 cri.go:89] found id: ""
	I1209 04:36:56.697580 1193189 logs.go:282] 0 containers: []
	W1209 04:36:56.697586 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:36:56.697592 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:36:56.697650 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:36:56.726250 1193189 cri.go:89] found id: ""
	I1209 04:36:56.726264 1193189 logs.go:282] 0 containers: []
	W1209 04:36:56.726270 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:36:56.726278 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:36:56.726287 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:36:56.789536 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:36:56.789556 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:36:56.818317 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:36:56.818332 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:36:56.874653 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:36:56.874671 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:36:56.892967 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:36:56.892987 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:36:56.969870 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:36:56.961196   16367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:56.962227   16367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:56.964000   16367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:56.964364   16367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:56.965851   16367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:36:59.470133 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:36:59.480193 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:36:59.480253 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:36:59.505288 1193189 cri.go:89] found id: ""
	I1209 04:36:59.505301 1193189 logs.go:282] 0 containers: []
	W1209 04:36:59.505308 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:36:59.505314 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:36:59.505375 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:36:59.530093 1193189 cri.go:89] found id: ""
	I1209 04:36:59.530108 1193189 logs.go:282] 0 containers: []
	W1209 04:36:59.530114 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:36:59.530120 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:36:59.530180 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:36:59.558857 1193189 cri.go:89] found id: ""
	I1209 04:36:59.558870 1193189 logs.go:282] 0 containers: []
	W1209 04:36:59.558877 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:36:59.558882 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:36:59.558939 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:36:59.587253 1193189 cri.go:89] found id: ""
	I1209 04:36:59.587267 1193189 logs.go:282] 0 containers: []
	W1209 04:36:59.587273 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:36:59.587278 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:36:59.587334 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:36:59.615574 1193189 cri.go:89] found id: ""
	I1209 04:36:59.615587 1193189 logs.go:282] 0 containers: []
	W1209 04:36:59.615594 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:36:59.615599 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:36:59.615661 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:36:59.640949 1193189 cri.go:89] found id: ""
	I1209 04:36:59.640963 1193189 logs.go:282] 0 containers: []
	W1209 04:36:59.640969 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:36:59.640975 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:36:59.641036 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:36:59.669059 1193189 cri.go:89] found id: ""
	I1209 04:36:59.669073 1193189 logs.go:282] 0 containers: []
	W1209 04:36:59.669079 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:36:59.669087 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:36:59.669099 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:36:59.728975 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:36:59.728993 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:36:59.746224 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:36:59.746240 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:36:59.811892 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:36:59.803565   16459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:59.804329   16459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:59.805884   16459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:59.806435   16459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:59.808154   16459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:36:59.803565   16459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:59.804329   16459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:59.805884   16459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:59.806435   16459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:59.808154   16459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
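
The block above is the recurring failure signature throughout this retry window: the in-node kubectl cannot reach the apiserver on localhost:8441 because, as the crictl probes show, no kube-apiserver container exists yet. The same reachability check can be reproduced by hand; a minimal sketch, assuming the profile name from this run and the apiserver's standard /livez endpoint:

    # Hedged sketch: re-run the failing probe from the host. Profile name
    # and port are taken from this run; adjust for your environment.
    minikube -p functional-667319 ssh -- \
      "sudo curl -sk https://localhost:8441/livez || echo apiserver unreachable"
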
	I1209 04:36:59.811908 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:36:59.811919 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:36:59.874287 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:36:59.874310 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
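
Each of these roughly three-second cycles is the same enumeration: one crictl query per expected control-plane component, and an empty ID list is what produces the "No container was found" warnings. A condensed, hand-runnable version of that sweep (run inside the node; assumes crictl is on PATH, as it is with the containerd runtime used here):

    # One crictl query per component, mirroring the probes in the log.
    for c in kube-apiserver etcd coredns kube-scheduler \
             kube-proxy kube-controller-manager kindnet; do
      ids=$(sudo crictl ps -a --quiet --name="$c")
      [ -z "$ids" ] && echo "no container found for $c"
    done
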
	I1209 04:37:02.402643 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:37:02.413719 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:37:02.413785 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:37:02.440871 1193189 cri.go:89] found id: ""
	I1209 04:37:02.440885 1193189 logs.go:282] 0 containers: []
	W1209 04:37:02.440892 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:37:02.440897 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:37:02.440962 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:37:02.466112 1193189 cri.go:89] found id: ""
	I1209 04:37:02.466125 1193189 logs.go:282] 0 containers: []
	W1209 04:37:02.466132 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:37:02.466137 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:37:02.466195 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:37:02.491412 1193189 cri.go:89] found id: ""
	I1209 04:37:02.491426 1193189 logs.go:282] 0 containers: []
	W1209 04:37:02.491433 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:37:02.491438 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:37:02.491495 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:37:02.519036 1193189 cri.go:89] found id: ""
	I1209 04:37:02.519051 1193189 logs.go:282] 0 containers: []
	W1209 04:37:02.519058 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:37:02.519063 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:37:02.519126 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:37:02.547912 1193189 cri.go:89] found id: ""
	I1209 04:37:02.547927 1193189 logs.go:282] 0 containers: []
	W1209 04:37:02.547934 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:37:02.547939 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:37:02.548000 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:37:02.574804 1193189 cri.go:89] found id: ""
	I1209 04:37:02.574818 1193189 logs.go:282] 0 containers: []
	W1209 04:37:02.574826 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:37:02.574832 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:37:02.574910 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:37:02.598953 1193189 cri.go:89] found id: ""
	I1209 04:37:02.598967 1193189 logs.go:282] 0 containers: []
	W1209 04:37:02.598973 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:37:02.598981 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:37:02.598994 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:37:02.661273 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:37:02.661293 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:37:02.692376 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:37:02.692392 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:37:02.750097 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:37:02.750116 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:37:02.768673 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:37:02.768691 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:37:02.831464 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:37:02.822705   16576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:37:02.823490   16576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:37:02.825015   16576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:37:02.825561   16576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:37:02.827104   16576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:37:02.822705   16576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:37:02.823490   16576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:37:02.825015   16576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:37:02.825561   16576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:37:02.827104   16576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:37:05.331744 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:37:05.341534 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:37:05.341596 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:37:05.366255 1193189 cri.go:89] found id: ""
	I1209 04:37:05.366268 1193189 logs.go:282] 0 containers: []
	W1209 04:37:05.366275 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:37:05.366280 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:37:05.366339 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:37:05.391184 1193189 cri.go:89] found id: ""
	I1209 04:37:05.391198 1193189 logs.go:282] 0 containers: []
	W1209 04:37:05.391204 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:37:05.391211 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:37:05.391273 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:37:05.418240 1193189 cri.go:89] found id: ""
	I1209 04:37:05.418253 1193189 logs.go:282] 0 containers: []
	W1209 04:37:05.418259 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:37:05.418264 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:37:05.418327 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:37:05.442720 1193189 cri.go:89] found id: ""
	I1209 04:37:05.442734 1193189 logs.go:282] 0 containers: []
	W1209 04:37:05.442740 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:37:05.442746 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:37:05.442809 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:37:05.467915 1193189 cri.go:89] found id: ""
	I1209 04:37:05.467930 1193189 logs.go:282] 0 containers: []
	W1209 04:37:05.467937 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:37:05.467942 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:37:05.468009 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:37:05.491304 1193189 cri.go:89] found id: ""
	I1209 04:37:05.491318 1193189 logs.go:282] 0 containers: []
	W1209 04:37:05.491325 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:37:05.491330 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:37:05.491388 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:37:05.520597 1193189 cri.go:89] found id: ""
	I1209 04:37:05.520616 1193189 logs.go:282] 0 containers: []
	W1209 04:37:05.520623 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:37:05.520631 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:37:05.520642 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:37:05.577158 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:37:05.577177 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:37:05.593604 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:37:05.593620 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:37:05.661751 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:37:05.653767   16667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:37:05.654429   16667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:37:05.656081   16667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:37:05.656695   16667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:37:05.658088   16667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:37:05.653767   16667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:37:05.654429   16667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:37:05.656081   16667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:37:05.656695   16667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:37:05.658088   16667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:37:05.661761 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:37:05.661771 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:37:05.729846 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:37:05.729866 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
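
The "container status" gather relies on a small fallback chain: the backtick substitution resolves to the crictl binary when present (otherwise it leaves the bare word crictl, which then fails), and the || falls through to the Docker CLI. Written out less tersely, the intent is roughly:

    # Rough equivalent of the container-status fallback: prefer the CRI
    # client, fall back to the Docker CLI. (The original one-liner also
    # falls back when crictl exists but errors; this only covers absence.)
    if command -v crictl >/dev/null 2>&1; then
      sudo crictl ps -a
    else
      sudo docker ps -a
    fi
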
	I1209 04:37:08.257598 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:37:08.267457 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:37:08.267520 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:37:08.295093 1193189 cri.go:89] found id: ""
	I1209 04:37:08.295107 1193189 logs.go:282] 0 containers: []
	W1209 04:37:08.295114 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:37:08.295119 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:37:08.295181 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:37:08.320140 1193189 cri.go:89] found id: ""
	I1209 04:37:08.320153 1193189 logs.go:282] 0 containers: []
	W1209 04:37:08.320160 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:37:08.320165 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:37:08.320233 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:37:08.344055 1193189 cri.go:89] found id: ""
	I1209 04:37:08.344069 1193189 logs.go:282] 0 containers: []
	W1209 04:37:08.344075 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:37:08.344081 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:37:08.344141 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:37:08.372791 1193189 cri.go:89] found id: ""
	I1209 04:37:08.372805 1193189 logs.go:282] 0 containers: []
	W1209 04:37:08.372811 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:37:08.372816 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:37:08.372874 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:37:08.396162 1193189 cri.go:89] found id: ""
	I1209 04:37:08.396175 1193189 logs.go:282] 0 containers: []
	W1209 04:37:08.396182 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:37:08.396187 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:37:08.396245 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:37:08.420733 1193189 cri.go:89] found id: ""
	I1209 04:37:08.420747 1193189 logs.go:282] 0 containers: []
	W1209 04:37:08.420755 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:37:08.420769 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:37:08.420830 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:37:08.444879 1193189 cri.go:89] found id: ""
	I1209 04:37:08.444894 1193189 logs.go:282] 0 containers: []
	W1209 04:37:08.444900 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:37:08.444918 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:37:08.444929 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:37:08.508132 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:37:08.499420   16764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:37:08.499882   16764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:37:08.501619   16764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:37:08.502150   16764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:37:08.503673   16764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:37:08.499420   16764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:37:08.499882   16764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:37:08.501619   16764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:37:08.502150   16764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:37:08.503673   16764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:37:08.508143 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:37:08.508156 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:37:08.570875 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:37:08.570900 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:37:08.602018 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:37:08.602034 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:37:08.663156 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:37:08.663174 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
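
The dmesg gather keeps only warning-and-worse kernel messages: -P disables the pager, -H selects human-readable output, -L=never strips color codes, and --level picks the severities. When correlating kernel events against the kubelet journal, a variant with absolute timestamps can help; a sketch, assuming util-linux dmesg:

    # Same severity filter, with ctime-style timestamps (-T) to line
    # kernel events up against journalctl output.
    sudo dmesg -T --level=warn,err,crit,alert,emerg | tail -n 400
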
	I1209 04:37:11.180415 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:37:11.191088 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:37:11.191148 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:37:11.218679 1193189 cri.go:89] found id: ""
	I1209 04:37:11.218696 1193189 logs.go:282] 0 containers: []
	W1209 04:37:11.218703 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:37:11.218708 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:37:11.218766 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:37:11.253810 1193189 cri.go:89] found id: ""
	I1209 04:37:11.253842 1193189 logs.go:282] 0 containers: []
	W1209 04:37:11.253849 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:37:11.253855 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:37:11.253925 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:37:11.279585 1193189 cri.go:89] found id: ""
	I1209 04:37:11.279599 1193189 logs.go:282] 0 containers: []
	W1209 04:37:11.279605 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:37:11.279610 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:37:11.279668 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:37:11.303733 1193189 cri.go:89] found id: ""
	I1209 04:37:11.303747 1193189 logs.go:282] 0 containers: []
	W1209 04:37:11.303754 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:37:11.303759 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:37:11.303818 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:37:11.328678 1193189 cri.go:89] found id: ""
	I1209 04:37:11.328692 1193189 logs.go:282] 0 containers: []
	W1209 04:37:11.328699 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:37:11.328710 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:37:11.328768 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:37:11.352807 1193189 cri.go:89] found id: ""
	I1209 04:37:11.352830 1193189 logs.go:282] 0 containers: []
	W1209 04:37:11.352838 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:37:11.352843 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:37:11.352904 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:37:11.380926 1193189 cri.go:89] found id: ""
	I1209 04:37:11.380940 1193189 logs.go:282] 0 containers: []
	W1209 04:37:11.380946 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:37:11.380954 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:37:11.380964 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:37:11.443730 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:37:11.443751 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:37:11.471147 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:37:11.471163 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:37:11.528045 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:37:11.528068 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:37:11.545822 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:37:11.545839 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:37:11.612652 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:37:11.604231   16889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:37:11.604891   16889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:37:11.606570   16889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:37:11.607169   16889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:37:11.608878   16889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:37:11.604231   16889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:37:11.604891   16889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:37:11.606570   16889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:37:11.607169   16889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:37:11.608878   16889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:37:14.112937 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:37:14.123734 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:37:14.123791 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:37:14.149868 1193189 cri.go:89] found id: ""
	I1209 04:37:14.149884 1193189 logs.go:282] 0 containers: []
	W1209 04:37:14.149891 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:37:14.149897 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:37:14.149957 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:37:14.175575 1193189 cri.go:89] found id: ""
	I1209 04:37:14.175589 1193189 logs.go:282] 0 containers: []
	W1209 04:37:14.175595 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:37:14.175601 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:37:14.175665 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:37:14.202589 1193189 cri.go:89] found id: ""
	I1209 04:37:14.202615 1193189 logs.go:282] 0 containers: []
	W1209 04:37:14.202621 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:37:14.202627 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:37:14.202707 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:37:14.229085 1193189 cri.go:89] found id: ""
	I1209 04:37:14.229099 1193189 logs.go:282] 0 containers: []
	W1209 04:37:14.229109 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:37:14.229117 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:37:14.229183 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:37:14.254508 1193189 cri.go:89] found id: ""
	I1209 04:37:14.254522 1193189 logs.go:282] 0 containers: []
	W1209 04:37:14.254529 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:37:14.254534 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:37:14.254626 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:37:14.282967 1193189 cri.go:89] found id: ""
	I1209 04:37:14.282990 1193189 logs.go:282] 0 containers: []
	W1209 04:37:14.282997 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:37:14.283003 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:37:14.283072 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:37:14.307959 1193189 cri.go:89] found id: ""
	I1209 04:37:14.307973 1193189 logs.go:282] 0 containers: []
	W1209 04:37:14.307980 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:37:14.307988 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:37:14.307998 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:37:14.337297 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:37:14.337312 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:37:14.393504 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:37:14.393523 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:37:14.411720 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:37:14.411736 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:37:14.476754 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:37:14.469112   16989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:37:14.469506   16989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:37:14.470955   16989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:37:14.471259   16989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:37:14.472758   16989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:37:14.469112   16989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:37:14.469506   16989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:37:14.470955   16989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:37:14.471259   16989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:37:14.472758   16989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:37:14.476764 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:37:14.476775 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:37:17.039773 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:37:17.050019 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:37:17.050078 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:37:17.074811 1193189 cri.go:89] found id: ""
	I1209 04:37:17.074825 1193189 logs.go:282] 0 containers: []
	W1209 04:37:17.074841 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:37:17.074847 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:37:17.074928 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:37:17.098749 1193189 cri.go:89] found id: ""
	I1209 04:37:17.098763 1193189 logs.go:282] 0 containers: []
	W1209 04:37:17.098779 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:37:17.098784 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:37:17.098851 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:37:17.123314 1193189 cri.go:89] found id: ""
	I1209 04:37:17.123328 1193189 logs.go:282] 0 containers: []
	W1209 04:37:17.123334 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:37:17.123348 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:37:17.123404 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:37:17.148281 1193189 cri.go:89] found id: ""
	I1209 04:37:17.148304 1193189 logs.go:282] 0 containers: []
	W1209 04:37:17.148314 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:37:17.148319 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:37:17.148386 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:37:17.178459 1193189 cri.go:89] found id: ""
	I1209 04:37:17.178473 1193189 logs.go:282] 0 containers: []
	W1209 04:37:17.178480 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:37:17.178487 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:37:17.178545 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:37:17.214370 1193189 cri.go:89] found id: ""
	I1209 04:37:17.214383 1193189 logs.go:282] 0 containers: []
	W1209 04:37:17.214390 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:37:17.214395 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:37:17.214455 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:37:17.241547 1193189 cri.go:89] found id: ""
	I1209 04:37:17.241560 1193189 logs.go:282] 0 containers: []
	W1209 04:37:17.241567 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:37:17.241574 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:37:17.241584 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:37:17.300902 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:37:17.300920 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:37:17.318244 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:37:17.318260 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:37:17.379838 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:37:17.371574   17083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:37:17.372258   17083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:37:17.373943   17083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:37:17.374513   17083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:37:17.376103   17083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:37:17.371574   17083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:37:17.372258   17083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:37:17.373943   17083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:37:17.374513   17083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:37:17.376103   17083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:37:17.379865 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:37:17.379875 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:37:17.442204 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:37:17.442227 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:37:19.972933 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:37:19.982835 1193189 kubeadm.go:602] duration metric: took 4m3.833613801s to restartPrimaryControlPlane
	W1209 04:37:19.982896 1193189 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1209 04:37:19.982967 1193189 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1209 04:37:20.394224 1193189 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 04:37:20.407222 1193189 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 04:37:20.415043 1193189 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1209 04:37:20.415096 1193189 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 04:37:20.422447 1193189 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 04:37:20.422458 1193189 kubeadm.go:158] found existing configuration files:
	
	I1209 04:37:20.422511 1193189 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1209 04:37:20.429958 1193189 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 04:37:20.430020 1193189 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 04:37:20.437087 1193189 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1209 04:37:20.444177 1193189 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 04:37:20.444229 1193189 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 04:37:20.451583 1193189 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1209 04:37:20.459107 1193189 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 04:37:20.459158 1193189 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 04:37:20.466013 1193189 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1209 04:37:20.473265 1193189 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 04:37:20.473320 1193189 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
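
The grep/rm sequence starting at 04:37:20.422 is minikube's stale-kubeconfig cleanup: each file under /etc/kubernetes is kept only if it already references the expected control-plane endpoint, and because kubeadm reset removed all four, every grep exits with status 2 (file missing) and every path is removed again defensively. Condensed into one loop, the logic is:

    # Keep a kubeconfig only if it points at the expected endpoint
    # (endpoint string taken from this run).
    ep="https://control-plane.minikube.internal:8441"
    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q "$ep" "/etc/kubernetes/$f.conf" 2>/dev/null \
        || sudo rm -f "/etc/kubernetes/$f.conf"
    done
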
	I1209 04:37:20.480362 1193189 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1209 04:37:20.591599 1193189 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1209 04:37:20.592032 1193189 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1209 04:37:20.651935 1193189 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1209 04:41:22.764150 1193189 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1209 04:41:22.764175 1193189 kubeadm.go:319] 
	I1209 04:41:22.764241 1193189 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
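
This is the actual failure, four minutes after the init began: kubeadm spent the full 4m0s wait-control-plane window polling the kubelet's local health endpoint and never got an answer, so it is the kubelet itself, not the apiserver, that never came up. The probe and the two triage commands kubeadm recommends can be run directly inside the node; a sketch, assuming a systemd host:

    # kubeadm's health probe, verbatim from the error above:
    curl -sSL http://127.0.0.1:10248/healthz; echo
    # When it times out, triage where kubeadm suggests:
    systemctl status kubelet --no-pager
    journalctl -xeu kubelet --no-pager | tail -n 50
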
	I1209 04:41:22.768309 1193189 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1209 04:41:22.768359 1193189 kubeadm.go:319] [preflight] Running pre-flight checks
	I1209 04:41:22.768442 1193189 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1209 04:41:22.768497 1193189 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1209 04:41:22.768531 1193189 kubeadm.go:319] OS: Linux
	I1209 04:41:22.768594 1193189 kubeadm.go:319] CGROUPS_CPU: enabled
	I1209 04:41:22.768653 1193189 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1209 04:41:22.768699 1193189 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1209 04:41:22.768746 1193189 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1209 04:41:22.768792 1193189 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1209 04:41:22.768840 1193189 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1209 04:41:22.768883 1193189 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1209 04:41:22.768930 1193189 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1209 04:41:22.768975 1193189 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1209 04:41:22.769046 1193189 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1209 04:41:22.769140 1193189 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1209 04:41:22.769229 1193189 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1209 04:41:22.769290 1193189 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1209 04:41:22.772269 1193189 out.go:252]   - Generating certificates and keys ...
	I1209 04:41:22.772365 1193189 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1209 04:41:22.772442 1193189 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1209 04:41:22.772517 1193189 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1209 04:41:22.772582 1193189 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1209 04:41:22.772651 1193189 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1209 04:41:22.772740 1193189 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1209 04:41:22.772808 1193189 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1209 04:41:22.772883 1193189 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1209 04:41:22.772975 1193189 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1209 04:41:22.773069 1193189 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1209 04:41:22.773105 1193189 kubeadm.go:319] [certs] Using the existing "sa" key
	I1209 04:41:22.773160 1193189 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1209 04:41:22.773215 1193189 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1209 04:41:22.773279 1193189 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1209 04:41:22.773333 1193189 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1209 04:41:22.773401 1193189 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1209 04:41:22.773459 1193189 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1209 04:41:22.773544 1193189 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1209 04:41:22.773604 1193189 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1209 04:41:22.778452 1193189 out.go:252]   - Booting up control plane ...
	I1209 04:41:22.778558 1193189 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1209 04:41:22.778636 1193189 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1209 04:41:22.778708 1193189 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1209 04:41:22.778830 1193189 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1209 04:41:22.778931 1193189 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1209 04:41:22.779034 1193189 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1209 04:41:22.779165 1193189 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1209 04:41:22.779213 1193189 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1209 04:41:22.779347 1193189 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1209 04:41:22.779447 1193189 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1209 04:41:22.779507 1193189 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001187798s
	I1209 04:41:22.779509 1193189 kubeadm.go:319] 
	I1209 04:41:22.779562 1193189 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1209 04:41:22.779605 1193189 kubeadm.go:319] 	- The kubelet is not running
	I1209 04:41:22.779728 1193189 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1209 04:41:22.779731 1193189 kubeadm.go:319] 
	I1209 04:41:22.779842 1193189 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1209 04:41:22.779891 1193189 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1209 04:41:22.779919 1193189 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1209 04:41:22.779932 1193189 kubeadm.go:319] 
	W1209 04:41:22.780053 1193189 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001187798s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1209 04:41:22.780164 1193189 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1209 04:41:23.192047 1193189 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 04:41:23.205020 1193189 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1209 04:41:23.205076 1193189 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 04:41:23.212555 1193189 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 04:41:23.212563 1193189 kubeadm.go:158] found existing configuration files:
	
	I1209 04:41:23.212616 1193189 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1209 04:41:23.220135 1193189 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 04:41:23.220190 1193189 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 04:41:23.227342 1193189 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1209 04:41:23.234934 1193189 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 04:41:23.234988 1193189 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 04:41:23.242413 1193189 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1209 04:41:23.249859 1193189 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 04:41:23.249916 1193189 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 04:41:23.257497 1193189 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1209 04:41:23.264938 1193189 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 04:41:23.264993 1193189 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1209 04:41:23.272287 1193189 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1209 04:41:23.315971 1193189 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1209 04:41:23.316329 1193189 kubeadm.go:319] [preflight] Running pre-flight checks
	I1209 04:41:23.386479 1193189 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1209 04:41:23.386543 1193189 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1209 04:41:23.386577 1193189 kubeadm.go:319] OS: Linux
	I1209 04:41:23.386622 1193189 kubeadm.go:319] CGROUPS_CPU: enabled
	I1209 04:41:23.386669 1193189 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1209 04:41:23.386716 1193189 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1209 04:41:23.386763 1193189 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1209 04:41:23.386810 1193189 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1209 04:41:23.386857 1193189 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1209 04:41:23.386901 1193189 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1209 04:41:23.386948 1193189 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1209 04:41:23.386993 1193189 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1209 04:41:23.459528 1193189 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1209 04:41:23.459630 1193189 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1209 04:41:23.459719 1193189 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1209 04:41:23.465017 1193189 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1209 04:41:23.470401 1193189 out.go:252]   - Generating certificates and keys ...
	I1209 04:41:23.470490 1193189 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1209 04:41:23.470556 1193189 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1209 04:41:23.470655 1193189 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1209 04:41:23.470730 1193189 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1209 04:41:23.470799 1193189 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1209 04:41:23.470852 1193189 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1209 04:41:23.470919 1193189 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1209 04:41:23.470980 1193189 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1209 04:41:23.471052 1193189 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1209 04:41:23.471123 1193189 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1209 04:41:23.471160 1193189 kubeadm.go:319] [certs] Using the existing "sa" key
	I1209 04:41:23.471222 1193189 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1209 04:41:23.897547 1193189 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1209 04:41:24.071180 1193189 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1209 04:41:24.419266 1193189 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1209 04:41:24.580042 1193189 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1209 04:41:25.012112 1193189 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1209 04:41:25.012658 1193189 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1209 04:41:25.015310 1193189 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1209 04:41:25.018776 1193189 out.go:252]   - Booting up control plane ...
	I1209 04:41:25.018875 1193189 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1209 04:41:25.018952 1193189 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1209 04:41:25.019019 1193189 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1209 04:41:25.039820 1193189 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1209 04:41:25.039928 1193189 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1209 04:41:25.047252 1193189 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1209 04:41:25.047955 1193189 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1209 04:41:25.048349 1193189 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1209 04:41:25.184171 1193189 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1209 04:41:25.184286 1193189 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1209 04:45:25.184394 1193189 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000314916s
	I1209 04:45:25.184418 1193189 kubeadm.go:319] 
	I1209 04:45:25.184509 1193189 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1209 04:45:25.184553 1193189 kubeadm.go:319] 	- The kubelet is not running
	I1209 04:45:25.184657 1193189 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1209 04:45:25.184661 1193189 kubeadm.go:319] 
	I1209 04:45:25.184765 1193189 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1209 04:45:25.184796 1193189 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1209 04:45:25.184826 1193189 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1209 04:45:25.184829 1193189 kubeadm.go:319] 
	I1209 04:45:25.188658 1193189 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1209 04:45:25.189080 1193189 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1209 04:45:25.189188 1193189 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1209 04:45:25.189440 1193189 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1209 04:45:25.189444 1193189 kubeadm.go:319] 
	I1209 04:45:25.189512 1193189 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1209 04:45:25.189563 1193189 kubeadm.go:403] duration metric: took 12m9.073031305s to StartCluster
	I1209 04:45:25.189594 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:45:25.189654 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:45:25.214653 1193189 cri.go:89] found id: ""
	I1209 04:45:25.214667 1193189 logs.go:282] 0 containers: []
	W1209 04:45:25.214674 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:45:25.214680 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:45:25.214745 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:45:25.239781 1193189 cri.go:89] found id: ""
	I1209 04:45:25.239795 1193189 logs.go:282] 0 containers: []
	W1209 04:45:25.239802 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:45:25.239806 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:45:25.239865 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:45:25.263923 1193189 cri.go:89] found id: ""
	I1209 04:45:25.263937 1193189 logs.go:282] 0 containers: []
	W1209 04:45:25.263943 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:45:25.263949 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:45:25.264009 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:45:25.289497 1193189 cri.go:89] found id: ""
	I1209 04:45:25.289510 1193189 logs.go:282] 0 containers: []
	W1209 04:45:25.289521 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:45:25.289527 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:45:25.289587 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:45:25.314477 1193189 cri.go:89] found id: ""
	I1209 04:45:25.314491 1193189 logs.go:282] 0 containers: []
	W1209 04:45:25.314497 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:45:25.314502 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:45:25.314564 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:45:25.343027 1193189 cri.go:89] found id: ""
	I1209 04:45:25.343041 1193189 logs.go:282] 0 containers: []
	W1209 04:45:25.343048 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:45:25.343054 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:45:25.343116 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:45:25.372137 1193189 cri.go:89] found id: ""
	I1209 04:45:25.372151 1193189 logs.go:282] 0 containers: []
	W1209 04:45:25.372158 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:45:25.372166 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:45:25.372175 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:45:25.430985 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:45:25.431004 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:45:25.448709 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:45:25.448726 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:45:25.515693 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:45:25.506884   20840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:25.507687   20840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:25.509338   20840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:25.509652   20840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:25.511142   20840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:45:25.506884   20840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:25.507687   20840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:25.509338   20840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:25.509652   20840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:25.511142   20840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:45:25.515704 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:45:25.515716 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:45:25.578666 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:45:25.578686 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1209 04:45:25.609638 1193189 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000314916s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1209 04:45:25.609683 1193189 out.go:285] * 
	W1209 04:45:25.609743 1193189 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000314916s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1209 04:45:25.609756 1193189 out.go:285] * 
	W1209 04:45:25.611848 1193189 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 04:45:25.617063 1193189 out.go:203] 
	W1209 04:45:25.620790 1193189 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000314916s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1209 04:45:25.620840 1193189 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1209 04:45:25.620858 1193189 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1209 04:45:25.624102 1193189 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 09 04:33:14 functional-667319 containerd[9667]: time="2025-12-09T04:33:14.466187116Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 09 04:33:14 functional-667319 containerd[9667]: time="2025-12-09T04:33:14.466260623Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 09 04:33:14 functional-667319 containerd[9667]: time="2025-12-09T04:33:14.466354134Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 09 04:33:14 functional-667319 containerd[9667]: time="2025-12-09T04:33:14.466435125Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 09 04:33:14 functional-667319 containerd[9667]: time="2025-12-09T04:33:14.466499213Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 09 04:33:14 functional-667319 containerd[9667]: time="2025-12-09T04:33:14.466598591Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 09 04:33:14 functional-667319 containerd[9667]: time="2025-12-09T04:33:14.466657461Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 09 04:33:14 functional-667319 containerd[9667]: time="2025-12-09T04:33:14.466714427Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 09 04:33:14 functional-667319 containerd[9667]: time="2025-12-09T04:33:14.466779869Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 09 04:33:14 functional-667319 containerd[9667]: time="2025-12-09T04:33:14.466859104Z" level=info msg="Connect containerd service"
	Dec 09 04:33:14 functional-667319 containerd[9667]: time="2025-12-09T04:33:14.467196932Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 09 04:33:14 functional-667319 containerd[9667]: time="2025-12-09T04:33:14.467824983Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 09 04:33:14 functional-667319 containerd[9667]: time="2025-12-09T04:33:14.483453604Z" level=info msg="Start subscribing containerd event"
	Dec 09 04:33:14 functional-667319 containerd[9667]: time="2025-12-09T04:33:14.484090811Z" level=info msg="Start recovering state"
	Dec 09 04:33:14 functional-667319 containerd[9667]: time="2025-12-09T04:33:14.483855889Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 09 04:33:14 functional-667319 containerd[9667]: time="2025-12-09T04:33:14.486351191Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 09 04:33:14 functional-667319 containerd[9667]: time="2025-12-09T04:33:14.525478311Z" level=info msg="Start event monitor"
	Dec 09 04:33:14 functional-667319 containerd[9667]: time="2025-12-09T04:33:14.525531922Z" level=info msg="Start cni network conf syncer for default"
	Dec 09 04:33:14 functional-667319 containerd[9667]: time="2025-12-09T04:33:14.525542924Z" level=info msg="Start streaming server"
	Dec 09 04:33:14 functional-667319 containerd[9667]: time="2025-12-09T04:33:14.525552738Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 09 04:33:14 functional-667319 containerd[9667]: time="2025-12-09T04:33:14.525560959Z" level=info msg="runtime interface starting up..."
	Dec 09 04:33:14 functional-667319 containerd[9667]: time="2025-12-09T04:33:14.525567629Z" level=info msg="starting plugins..."
	Dec 09 04:33:14 functional-667319 containerd[9667]: time="2025-12-09T04:33:14.525580897Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 09 04:33:14 functional-667319 containerd[9667]: time="2025-12-09T04:33:14.525716006Z" level=info msg="containerd successfully booted in 0.083289s"
	Dec 09 04:33:14 functional-667319 systemd[1]: Started containerd.service - containerd container runtime.
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:45:28.965976   21096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:28.966417   21096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:28.967957   21096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:28.968458   21096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:28.969915   21096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 9 03:13] overlayfs: idmapped layers are currently not supported
	[ +25.904254] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:14] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:16] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:18] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:19] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:30] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:32] overlayfs: idmapped layers are currently not supported
	[ +28.114653] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:33] overlayfs: idmapped layers are currently not supported
	[ +23.720849] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:34] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:35] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:36] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:37] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:38] overlayfs: idmapped layers are currently not supported
	[ +23.656275] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:39] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:57] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:58] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:00] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:02] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:03] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:05] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:06] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 04:45:29 up  7:27,  0 user,  load average: 0.30, 0.22, 0.48
	Linux functional-667319 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 09 04:45:25 functional-667319 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 09 04:45:26 functional-667319 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 09 04:45:26 functional-667319 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:45:26 functional-667319 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:45:26 functional-667319 kubelet[20873]: E1209 04:45:26.247162   20873 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 09 04:45:26 functional-667319 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 09 04:45:26 functional-667319 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 09 04:45:26 functional-667319 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Dec 09 04:45:26 functional-667319 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:45:26 functional-667319 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:45:27 functional-667319 kubelet[20971]: E1209 04:45:27.012322   20971 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 09 04:45:27 functional-667319 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 09 04:45:27 functional-667319 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 09 04:45:27 functional-667319 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 323.
	Dec 09 04:45:27 functional-667319 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:45:27 functional-667319 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:45:27 functional-667319 kubelet[20989]: E1209 04:45:27.712636   20989 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 09 04:45:27 functional-667319 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 09 04:45:27 functional-667319 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 09 04:45:28 functional-667319 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 324.
	Dec 09 04:45:28 functional-667319 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:45:28 functional-667319 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:45:28 functional-667319 kubelet[21012]: E1209 04:45:28.488960   21012 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 09 04:45:28 functional-667319 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 09 04:45:28 functional-667319 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
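The kubelet journal above shows the actual root cause: kubelet v1.35.0-beta.0 refuses to validate its configuration on a cgroup v1 host, so systemd restart-loops it (restart counter 321-324) and the kubeadm wait-control-plane phase times out. The kubeadm warning earlier in the log names the escape hatch ("set the kubelet configuration option 'FailCgroupV1' to 'false'"). A minimal sketch, assuming failCgroupV1 is the KubeletConfiguration field that warning refers to, of checking the host and applying the override through the same patch mechanism the log already exercises ("[patches] ... target kubeletconfiguration"); the patch directory is illustrative and the exact patch-file layout should be confirmed against the kubeadm --patches documentation:

	# Which cgroup hierarchy is the node on? "tmpfs" => cgroup v1, "cgroup2fs" => v2.
	stat -fc %T /sys/fs/cgroup/

	# Hypothetical patch relaxing the cgroup v1 validation (field name assumed
	# from the warning text; not verified against the v1.35 kubelet API).
	mkdir -p /tmp/kubeadm-patches
	cat > /tmp/kubeadm-patches/kubeletconfiguration+strategic.yaml <<'EOF'
	failCgroupV1: false
	EOF
	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --patches /tmp/kubeadm-patches

The suggestion minikube prints (--extra-config=kubelet.cgroup-driver=systemd) likely targets a different failure mode (cgroup driver mismatch) and would not, by itself, appear to relax this validation.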
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-667319 -n functional-667319
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-667319 -n functional-667319: exit status 2 (332.248592ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-667319" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (2.08s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-667319 apply -f testdata/invalidsvc.yaml
functional_test.go:2326: (dbg) Non-zero exit: kubectl --context functional-667319 apply -f testdata/invalidsvc.yaml: exit status 1 (60.054764ms)

** stderr ** 
	error: error validating "testdata/invalidsvc.yaml": error validating data: failed to download openapi: Get "https://192.168.49.2:8441/openapi/v2?timeout=32s": dial tcp 192.168.49.2:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false

** /stderr **
functional_test.go:2328: kubectl --context functional-667319 apply -f testdata/invalidsvc.yaml failed: exit status 1
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (0.06s)
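The apply here fails before any validation runs: kubectl cannot fetch the OpenAPI schema because nothing is listening on 192.168.49.2:8441, the apiserver that never came up in the preceding tests. A quick reachability sketch, reusing the address from the error message (-k only skips certificate verification for the probe; even an auth error back would prove the port is open):

	curl -k https://192.168.49.2:8441/healthz || echo "apiserver unreachable"
	out/minikube-linux-arm64 status -p functional-667319
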
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (1.76s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-667319 --alsologtostderr -v=1]
functional_test.go:933: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-667319 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-667319 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-667319 --alsologtostderr -v=1] stderr:
I1209 04:47:34.529357 1211979 out.go:360] Setting OutFile to fd 1 ...
I1209 04:47:34.529526 1211979 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1209 04:47:34.529538 1211979 out.go:374] Setting ErrFile to fd 2...
I1209 04:47:34.529544 1211979 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1209 04:47:34.529800 1211979 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-1142328/.minikube/bin
I1209 04:47:34.530069 1211979 mustload.go:66] Loading cluster: functional-667319
I1209 04:47:34.530527 1211979 config.go:182] Loaded profile config "functional-667319": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1209 04:47:34.530999 1211979 cli_runner.go:164] Run: docker container inspect functional-667319 --format={{.State.Status}}
I1209 04:47:34.548449 1211979 host.go:66] Checking if "functional-667319" exists ...
I1209 04:47:34.548780 1211979 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1209 04:47:34.607981 1211979 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-09 04:47:34.597259725 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1209 04:47:34.608115 1211979 api_server.go:166] Checking apiserver status ...
I1209 04:47:34.608177 1211979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1209 04:47:34.608223 1211979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-667319
I1209 04:47:34.625153 1211979 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/functional-667319/id_rsa Username:docker}
W1209 04:47:34.734363 1211979 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:
I1209 04:47:34.737518 1211979 out.go:179] * The control-plane node functional-667319 apiserver is not running: (state=Stopped)
I1209 04:47:34.740232 1211979 out.go:179]   To start a cluster, run: "minikube start -p functional-667319"
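The dashboard command aborts here because mustload's apiserver probe comes back empty: pgrep finds no kube-apiserver process inside the node container. The same probe can be rerun by hand using the profile name and pattern from the stderr above (a sketch; the binary path assumes the same workspace layout):

	out/minikube-linux-arm64 status -p functional-667319
	docker exec functional-667319 sudo pgrep -xnf 'kube-apiserver.*minikube.*'

With the kubelet crash-looping (see the kubelet section in the logs below), both report the apiserver as stopped.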
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-667319
helpers_test.go:243: (dbg) docker inspect functional-667319:

-- stdout --
	[
	    {
	        "Id": "e5b6511799c8d5c445a335a3bd5cc9a61b518fc27ac93dad8800da366ef32129",
	        "Created": "2025-12-09T04:18:34.060957311Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1182075,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-09T04:18:34.126944158Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e4eb91ed18a24161fce60c7cdd660144ecd5b8c5029dc2dea2c5e423c2f48ce4",
	        "ResolvConfPath": "/var/lib/docker/containers/e5b6511799c8d5c445a335a3bd5cc9a61b518fc27ac93dad8800da366ef32129/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e5b6511799c8d5c445a335a3bd5cc9a61b518fc27ac93dad8800da366ef32129/hostname",
	        "HostsPath": "/var/lib/docker/containers/e5b6511799c8d5c445a335a3bd5cc9a61b518fc27ac93dad8800da366ef32129/hosts",
	        "LogPath": "/var/lib/docker/containers/e5b6511799c8d5c445a335a3bd5cc9a61b518fc27ac93dad8800da366ef32129/e5b6511799c8d5c445a335a3bd5cc9a61b518fc27ac93dad8800da366ef32129-json.log",
	        "Name": "/functional-667319",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-667319:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-667319",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e5b6511799c8d5c445a335a3bd5cc9a61b518fc27ac93dad8800da366ef32129",
	                "LowerDir": "/var/lib/docker/overlay2/b0239006282b6e4609a1f554d0a3fb94c749a13505795c8e4078cb2db194e8e0-init/diff:/var/lib/docker/overlay2/c44bb57aa59cc265266f37f2bb6e7ec0e7d641c3b4aeaa57e6d23deec6f0d1d4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b0239006282b6e4609a1f554d0a3fb94c749a13505795c8e4078cb2db194e8e0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b0239006282b6e4609a1f554d0a3fb94c749a13505795c8e4078cb2db194e8e0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b0239006282b6e4609a1f554d0a3fb94c749a13505795c8e4078cb2db194e8e0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-667319",
	                "Source": "/var/lib/docker/volumes/functional-667319/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-667319",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-667319",
	                "name.minikube.sigs.k8s.io": "functional-667319",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7c81dabcd9e57af9bce0bc0f5619f6ef3a27af43f4b649283a5bd778ab256415",
	            "SandboxKey": "/var/run/docker/netns/7c81dabcd9e5",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33900"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33901"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33904"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33902"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33903"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-667319": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "fe:40:bd:46:56:d8",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "88b3a65de70c15005c532a44219284d4df94e474ca5b78b04514c2f932b03beb",
	                    "EndpointID": "bdef7b156f4a28c1f641ae70b42db2750bb810ae6fe93fd65325e62eb232fe91",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-667319",
	                        "e5b6511799c8"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
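The sshutil line in the dashboard stderr above (Port:33900) comes from the same Ports map shown in this inspect output. The standalone equivalent of that lookup, if the mapping needs checking by hand (a sketch using the profile name from this report):

	docker inspect functional-667319 --format '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'

On this run it prints 33900.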
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-667319 -n functional-667319
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-667319 -n functional-667319: exit status 2 (305.83136ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-667319 logs -n 25
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	┌───────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND  │                                                                        ARGS                                                                         │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├───────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh       │ functional-667319 ssh sudo umount -f /mount-9p                                                                                                      │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:45 UTC │ 09 Dec 25 04:45 UTC │
	│ ssh       │ functional-667319 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:45 UTC │                     │
	│ mount     │ -p functional-667319 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo4027315530/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:45 UTC │                     │
	│ ssh       │ functional-667319 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:45 UTC │ 09 Dec 25 04:45 UTC │
	│ ssh       │ functional-667319 ssh -- ls -la /mount-9p                                                                                                           │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:45 UTC │ 09 Dec 25 04:45 UTC │
	│ ssh       │ functional-667319 ssh sudo umount -f /mount-9p                                                                                                      │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:45 UTC │                     │
	│ mount     │ -p functional-667319 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3101865674/001:/mount1 --alsologtostderr -v=1                │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:45 UTC │                     │
	│ mount     │ -p functional-667319 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3101865674/001:/mount2 --alsologtostderr -v=1                │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:45 UTC │                     │
	│ ssh       │ functional-667319 ssh findmnt -T /mount1                                                                                                            │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:45 UTC │                     │
	│ mount     │ -p functional-667319 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3101865674/001:/mount3 --alsologtostderr -v=1                │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:45 UTC │                     │
	│ ssh       │ functional-667319 ssh findmnt -T /mount1                                                                                                            │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:45 UTC │ 09 Dec 25 04:45 UTC │
	│ ssh       │ functional-667319 ssh findmnt -T /mount2                                                                                                            │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:45 UTC │ 09 Dec 25 04:45 UTC │
	│ ssh       │ functional-667319 ssh findmnt -T /mount3                                                                                                            │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:45 UTC │ 09 Dec 25 04:45 UTC │
	│ mount     │ -p functional-667319 --kill=true                                                                                                                    │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:45 UTC │                     │
	│ addons    │ functional-667319 addons list                                                                                                                       │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:47 UTC │ 09 Dec 25 04:47 UTC │
	│ addons    │ functional-667319 addons list -o json                                                                                                               │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:47 UTC │ 09 Dec 25 04:47 UTC │
	│ service   │ functional-667319 service list                                                                                                                      │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:47 UTC │                     │
	│ service   │ functional-667319 service list -o json                                                                                                              │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:47 UTC │                     │
	│ service   │ functional-667319 service --namespace=default --https --url hello-node                                                                              │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:47 UTC │                     │
	│ service   │ functional-667319 service hello-node --url --format={{.IP}}                                                                                         │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:47 UTC │                     │
	│ service   │ functional-667319 service hello-node --url                                                                                                          │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:47 UTC │                     │
	│ start     │ -p functional-667319 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:47 UTC │                     │
	│ start     │ -p functional-667319 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0           │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:47 UTC │                     │
	│ start     │ -p functional-667319 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:47 UTC │                     │
	│ dashboard │ --url --port 36195 -p functional-667319 --alsologtostderr -v=1                                                                                      │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:47 UTC │                     │
	└───────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/09 04:47:34
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 04:47:34.321713 1211929 out.go:360] Setting OutFile to fd 1 ...
	I1209 04:47:34.321929 1211929 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 04:47:34.321959 1211929 out.go:374] Setting ErrFile to fd 2...
	I1209 04:47:34.321979 1211929 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 04:47:34.322391 1211929 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-1142328/.minikube/bin
	I1209 04:47:34.322830 1211929 out.go:368] Setting JSON to false
	I1209 04:47:34.323745 1211929 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":26978,"bootTime":1765228677,"procs":158,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1209 04:47:34.323845 1211929 start.go:143] virtualization:  
	I1209 04:47:34.327171 1211929 out.go:179] * [functional-667319] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1209 04:47:34.331051 1211929 notify.go:221] Checking for updates...
	I1209 04:47:34.331388 1211929 out.go:179]   - MINIKUBE_LOCATION=22081
	I1209 04:47:34.334585 1211929 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 04:47:34.337483 1211929 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22081-1142328/kubeconfig
	I1209 04:47:34.340345 1211929 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-1142328/.minikube
	I1209 04:47:34.343178 1211929 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1209 04:47:34.346040 1211929 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 04:47:34.349423 1211929 config.go:182] Loaded profile config "functional-667319": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1209 04:47:34.349977 1211929 driver.go:422] Setting default libvirt URI to qemu:///system
	I1209 04:47:34.375111 1211929 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1209 04:47:34.375225 1211929 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 04:47:34.453332 1211929 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-09 04:47:34.434527916 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1209 04:47:34.453454 1211929 docker.go:319] overlay module found
	I1209 04:47:34.456344 1211929 out.go:179] * Using the docker driver based on existing profile
	I1209 04:47:34.459148 1211929 start.go:309] selected driver: docker
	I1209 04:47:34.459167 1211929 start.go:927] validating driver "docker" against &{Name:functional-667319 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-667319 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 04:47:34.459255 1211929 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 04:47:34.462988 1211929 out.go:203] 
	W1209 04:47:34.465967 1211929 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1209 04:47:34.469187 1211929 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 09 04:45:34 functional-667319 containerd[9667]: time="2025-12-09T04:45:34.304901430Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-667319\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 09 04:45:35 functional-667319 containerd[9667]: time="2025-12-09T04:45:35.122240132Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-667319\""
	Dec 09 04:45:35 functional-667319 containerd[9667]: time="2025-12-09T04:45:35.124969737Z" level=info msg="ImageDelete event name:\"docker.io/kicbase/echo-server:functional-667319\""
	Dec 09 04:45:35 functional-667319 containerd[9667]: time="2025-12-09T04:45:35.127379711Z" level=info msg="ImageDelete event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\""
	Dec 09 04:45:35 functional-667319 containerd[9667]: time="2025-12-09T04:45:35.136337700Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-667319\" returns successfully"
	Dec 09 04:45:35 functional-667319 containerd[9667]: time="2025-12-09T04:45:35.369054172Z" level=info msg="No images store for sha256:dd3309dec5df27eec01ab59220514c77e78d9b5409234aefaeee1c6a1c609658"
	Dec 09 04:45:35 functional-667319 containerd[9667]: time="2025-12-09T04:45:35.371319041Z" level=info msg="ImageCreate event name:\"docker.io/kicbase/echo-server:functional-667319\""
	Dec 09 04:45:35 functional-667319 containerd[9667]: time="2025-12-09T04:45:35.378181758Z" level=info msg="ImageCreate event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 09 04:45:35 functional-667319 containerd[9667]: time="2025-12-09T04:45:35.378777478Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-667319\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 09 04:45:36 functional-667319 containerd[9667]: time="2025-12-09T04:45:36.438263871Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-667319\""
	Dec 09 04:45:36 functional-667319 containerd[9667]: time="2025-12-09T04:45:36.441243432Z" level=info msg="ImageDelete event name:\"docker.io/kicbase/echo-server:functional-667319\""
	Dec 09 04:45:36 functional-667319 containerd[9667]: time="2025-12-09T04:45:36.443329590Z" level=info msg="ImageDelete event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\""
	Dec 09 04:45:36 functional-667319 containerd[9667]: time="2025-12-09T04:45:36.451953736Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-667319\" returns successfully"
	Dec 09 04:45:36 functional-667319 containerd[9667]: time="2025-12-09T04:45:36.699722091Z" level=info msg="No images store for sha256:dd3309dec5df27eec01ab59220514c77e78d9b5409234aefaeee1c6a1c609658"
	Dec 09 04:45:36 functional-667319 containerd[9667]: time="2025-12-09T04:45:36.702120561Z" level=info msg="ImageCreate event name:\"docker.io/kicbase/echo-server:functional-667319\""
	Dec 09 04:45:36 functional-667319 containerd[9667]: time="2025-12-09T04:45:36.709091689Z" level=info msg="ImageCreate event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 09 04:45:36 functional-667319 containerd[9667]: time="2025-12-09T04:45:36.709423744Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-667319\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 09 04:45:37 functional-667319 containerd[9667]: time="2025-12-09T04:45:37.473128393Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-667319\""
	Dec 09 04:45:37 functional-667319 containerd[9667]: time="2025-12-09T04:45:37.475631173Z" level=info msg="ImageDelete event name:\"docker.io/kicbase/echo-server:functional-667319\""
	Dec 09 04:45:37 functional-667319 containerd[9667]: time="2025-12-09T04:45:37.477592221Z" level=info msg="ImageDelete event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\""
	Dec 09 04:45:37 functional-667319 containerd[9667]: time="2025-12-09T04:45:37.490276659Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-667319\" returns successfully"
	Dec 09 04:45:38 functional-667319 containerd[9667]: time="2025-12-09T04:45:38.148598616Z" level=info msg="No images store for sha256:904ceb29077e75bbca4483a04b0d4e97cdb7c2e3a6b6f3f1bb70ace08229b0b3"
	Dec 09 04:45:38 functional-667319 containerd[9667]: time="2025-12-09T04:45:38.150763877Z" level=info msg="ImageCreate event name:\"docker.io/kicbase/echo-server:functional-667319\""
	Dec 09 04:45:38 functional-667319 containerd[9667]: time="2025-12-09T04:45:38.160850013Z" level=info msg="ImageCreate event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 09 04:45:38 functional-667319 containerd[9667]: time="2025-12-09T04:45:38.161468872Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-667319\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:47:35.782967   23723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:47:35.783620   23723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:47:35.785288   23723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:47:35.785712   23723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:47:35.787315   23723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 9 03:13] overlayfs: idmapped layers are currently not supported
	[ +25.904254] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:14] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:16] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:18] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:19] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:30] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:32] overlayfs: idmapped layers are currently not supported
	[ +28.114653] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:33] overlayfs: idmapped layers are currently not supported
	[ +23.720849] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:34] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:35] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:36] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:37] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:38] overlayfs: idmapped layers are currently not supported
	[ +23.656275] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:39] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:57] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:58] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:00] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:02] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:03] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:05] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:06] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 04:47:35 up  7:29,  0 user,  load average: 0.65, 0.32, 0.48
	Linux functional-667319 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 09 04:47:32 functional-667319 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 09 04:47:32 functional-667319 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 490.
	Dec 09 04:47:32 functional-667319 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:47:32 functional-667319 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:47:32 functional-667319 kubelet[23468]: E1209 04:47:32.904210   23468 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 09 04:47:32 functional-667319 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 09 04:47:32 functional-667319 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 09 04:47:33 functional-667319 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 491.
	Dec 09 04:47:33 functional-667319 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:47:33 functional-667319 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:47:33 functional-667319 kubelet[23546]: E1209 04:47:33.753162   23546 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 09 04:47:33 functional-667319 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 09 04:47:33 functional-667319 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 09 04:47:34 functional-667319 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 492.
	Dec 09 04:47:34 functional-667319 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:47:34 functional-667319 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:47:34 functional-667319 kubelet[23606]: E1209 04:47:34.515065   23606 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 09 04:47:34 functional-667319 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 09 04:47:34 functional-667319 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 09 04:47:35 functional-667319 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 493.
	Dec 09 04:47:35 functional-667319 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:47:35 functional-667319 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:47:35 functional-667319 kubelet[23636]: E1209 04:47:35.246153   23636 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 09 04:47:35 functional-667319 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 09 04:47:35 functional-667319 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
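The kubelet section above shows the root failure behind this test group: kubelet v1.35.0-beta.0 refuses to start on a cgroup v1 host ("cgroup v1 support is unsupported and will be removed in a future release"), so the apiserver never comes up and every dependent check fails. The host's cgroup mode can be confirmed directly (a sketch, assuming shell access to the Jenkins host):

	stat -fc %T /sys/fs/cgroup/          # tmpfs => cgroup v1, cgroup2fs => cgroup v2
	docker info --format '{{.CgroupVersion}}'

Ubuntu 20.04, the host OS reported in the docker info above, boots with cgroup v1 by default, which is consistent with this failure.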
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-667319 -n functional-667319
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-667319 -n functional-667319: exit status 2 (360.483218ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-667319" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (1.76s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (2.27s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-667319 status
functional_test.go:869: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-667319 status: exit status 2 (322.671854ms)

-- stdout --
	functional-667319
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Configured
	

-- /stdout --
functional_test.go:871: failed to run minikube status. args "out/minikube-linux-arm64 -p functional-667319 status" : exit status 2
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-667319 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:875: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-667319 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: exit status 2 (298.14052ms)

-- stdout --
	host:Running,kublet:Stopped,apiserver:Stopped,kubeconfig:Configured

-- /stdout --
functional_test.go:877: failed to run minikube status with custom format: args "out/minikube-linux-arm64 -p functional-667319 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}": exit status 2
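Note that "kublet:" in the format string is literal template text rather than a field reference; only the {{.Kubelet}} placeholder is evaluated, so the typo does not change the test outcome. A corrected invocation would look like (sketch):

	out/minikube-linux-arm64 -p functional-667319 status -f 'kubelet:{{.Kubelet}}'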
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-667319 status -o json
functional_test.go:887: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-667319 status -o json: exit status 2 (322.132199ms)

-- stdout --
	{"Name":"functional-667319","Host":"Running","Kubelet":"Running","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
functional_test.go:889: failed to run minikube status with json output. args "out/minikube-linux-arm64 -p functional-667319 status -o json" : exit status 2
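The JSON form is the easiest to post-process, e.g. extracting a single component's state (a sketch, assuming jq is installed on the host):

	out/minikube-linux-arm64 -p functional-667319 status -o json | jq -r '.APIServer'

Note that Kubelet reads "Running" here but "Stopped" in the two calls above: with the kubelet crash-looping under systemd restarts, the reported state flaps between probes.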
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-667319
helpers_test.go:243: (dbg) docker inspect functional-667319:

-- stdout --
	[
	    {
	        "Id": "e5b6511799c8d5c445a335a3bd5cc9a61b518fc27ac93dad8800da366ef32129",
	        "Created": "2025-12-09T04:18:34.060957311Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1182075,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-09T04:18:34.126944158Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e4eb91ed18a24161fce60c7cdd660144ecd5b8c5029dc2dea2c5e423c2f48ce4",
	        "ResolvConfPath": "/var/lib/docker/containers/e5b6511799c8d5c445a335a3bd5cc9a61b518fc27ac93dad8800da366ef32129/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e5b6511799c8d5c445a335a3bd5cc9a61b518fc27ac93dad8800da366ef32129/hostname",
	        "HostsPath": "/var/lib/docker/containers/e5b6511799c8d5c445a335a3bd5cc9a61b518fc27ac93dad8800da366ef32129/hosts",
	        "LogPath": "/var/lib/docker/containers/e5b6511799c8d5c445a335a3bd5cc9a61b518fc27ac93dad8800da366ef32129/e5b6511799c8d5c445a335a3bd5cc9a61b518fc27ac93dad8800da366ef32129-json.log",
	        "Name": "/functional-667319",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-667319:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-667319",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e5b6511799c8d5c445a335a3bd5cc9a61b518fc27ac93dad8800da366ef32129",
	                "LowerDir": "/var/lib/docker/overlay2/b0239006282b6e4609a1f554d0a3fb94c749a13505795c8e4078cb2db194e8e0-init/diff:/var/lib/docker/overlay2/c44bb57aa59cc265266f37f2bb6e7ec0e7d641c3b4aeaa57e6d23deec6f0d1d4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b0239006282b6e4609a1f554d0a3fb94c749a13505795c8e4078cb2db194e8e0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b0239006282b6e4609a1f554d0a3fb94c749a13505795c8e4078cb2db194e8e0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b0239006282b6e4609a1f554d0a3fb94c749a13505795c8e4078cb2db194e8e0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-667319",
	                "Source": "/var/lib/docker/volumes/functional-667319/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-667319",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-667319",
	                "name.minikube.sigs.k8s.io": "functional-667319",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7c81dabcd9e57af9bce0bc0f5619f6ef3a27af43f4b649283a5bd778ab256415",
	            "SandboxKey": "/var/run/docker/netns/7c81dabcd9e5",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33900"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33901"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33904"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33902"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33903"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-667319": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "fe:40:bd:46:56:d8",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "88b3a65de70c15005c532a44219284d4df94e474ca5b78b04514c2f932b03beb",
	                    "EndpointID": "bdef7b156f4a28c1f641ae70b42db2750bb810ae6fe93fd65325e62eb232fe91",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-667319",
	                        "e5b6511799c8"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
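The Ports map in the inspect output above shows the cluster's apiserver port 8441/tcp published on 127.0.0.1:33903; that forwarded address is what the status probes below talk to. As a sketch (plain docker templating, not a harness helper), the same map can be pulled without the full JSON:

	docker inspect -f '{{json .NetworkSettings.Ports}}' functional-667319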
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-667319 -n functional-667319
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-667319 -n functional-667319: exit status 2 (300.391887ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-667319 logs -n 25
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                        ARGS                                                                         │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-667319 ssh cat /mount-9p/test-1765255545185688072                                                                                        │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:45 UTC │ 09 Dec 25 04:45 UTC │
	│ ssh     │ functional-667319 ssh mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates                                                                    │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:45 UTC │                     │
	│ ssh     │ functional-667319 ssh sudo umount -f /mount-9p                                                                                                      │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:45 UTC │ 09 Dec 25 04:45 UTC │
	│ ssh     │ functional-667319 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:45 UTC │                     │
	│ mount   │ -p functional-667319 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo4027315530/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:45 UTC │                     │
	│ ssh     │ functional-667319 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:45 UTC │ 09 Dec 25 04:45 UTC │
	│ ssh     │ functional-667319 ssh -- ls -la /mount-9p                                                                                                           │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:45 UTC │ 09 Dec 25 04:45 UTC │
	│ ssh     │ functional-667319 ssh sudo umount -f /mount-9p                                                                                                      │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:45 UTC │                     │
	│ mount   │ -p functional-667319 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3101865674/001:/mount1 --alsologtostderr -v=1                │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:45 UTC │                     │
	│ mount   │ -p functional-667319 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3101865674/001:/mount2 --alsologtostderr -v=1                │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:45 UTC │                     │
	│ ssh     │ functional-667319 ssh findmnt -T /mount1                                                                                                            │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:45 UTC │                     │
	│ mount   │ -p functional-667319 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3101865674/001:/mount3 --alsologtostderr -v=1                │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:45 UTC │                     │
	│ ssh     │ functional-667319 ssh findmnt -T /mount1                                                                                                            │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:45 UTC │ 09 Dec 25 04:45 UTC │
	│ ssh     │ functional-667319 ssh findmnt -T /mount2                                                                                                            │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:45 UTC │ 09 Dec 25 04:45 UTC │
	│ ssh     │ functional-667319 ssh findmnt -T /mount3                                                                                                            │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:45 UTC │ 09 Dec 25 04:45 UTC │
	│ mount   │ -p functional-667319 --kill=true                                                                                                                    │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:45 UTC │                     │
	│ addons  │ functional-667319 addons list                                                                                                                       │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:47 UTC │ 09 Dec 25 04:47 UTC │
	│ addons  │ functional-667319 addons list -o json                                                                                                               │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:47 UTC │ 09 Dec 25 04:47 UTC │
	│ service │ functional-667319 service list                                                                                                                      │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:47 UTC │                     │
	│ service │ functional-667319 service list -o json                                                                                                              │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:47 UTC │                     │
	│ service │ functional-667319 service --namespace=default --https --url hello-node                                                                              │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:47 UTC │                     │
	│ service │ functional-667319 service hello-node --url --format={{.IP}}                                                                                         │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:47 UTC │                     │
	│ service │ functional-667319 service hello-node --url                                                                                                          │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:47 UTC │                     │
	│ start   │ -p functional-667319 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:47 UTC │                     │
	│ start   │ -p functional-667319 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0           │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:47 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/09 04:47:31
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 04:47:31.812584 1211342 out.go:360] Setting OutFile to fd 1 ...
	I1209 04:47:31.812764 1211342 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 04:47:31.812794 1211342 out.go:374] Setting ErrFile to fd 2...
	I1209 04:47:31.812813 1211342 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 04:47:31.813182 1211342 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-1142328/.minikube/bin
	I1209 04:47:31.813627 1211342 out.go:368] Setting JSON to false
	I1209 04:47:31.814882 1211342 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":26975,"bootTime":1765228677,"procs":158,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1209 04:47:31.815125 1211342 start.go:143] virtualization:  
	I1209 04:47:31.818233 1211342 out.go:179] * [functional-667319] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1209 04:47:31.821944 1211342 out.go:179]   - MINIKUBE_LOCATION=22081
	I1209 04:47:31.822024 1211342 notify.go:221] Checking for updates...
	I1209 04:47:31.825581 1211342 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 04:47:31.828464 1211342 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22081-1142328/kubeconfig
	I1209 04:47:31.831263 1211342 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-1142328/.minikube
	I1209 04:47:31.834134 1211342 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1209 04:47:31.836988 1211342 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 04:47:31.840364 1211342 config.go:182] Loaded profile config "functional-667319": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1209 04:47:31.840964 1211342 driver.go:422] Setting default libvirt URI to qemu:///system
	I1209 04:47:31.869512 1211342 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1209 04:47:31.869630 1211342 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 04:47:31.928321 1211342 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-09 04:47:31.919404226 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1209 04:47:31.928424 1211342 docker.go:319] overlay module found
	I1209 04:47:31.931500 1211342 out.go:179] * Using the docker driver based on existing profile
	I1209 04:47:31.934315 1211342 start.go:309] selected driver: docker
	I1209 04:47:31.934333 1211342 start.go:927] validating driver "docker" against &{Name:functional-667319 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-667319 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 04:47:31.934423 1211342 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 04:47:31.934529 1211342 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 04:47:31.987536 1211342 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-09 04:47:31.97843202 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1209 04:47:31.987980 1211342 cni.go:84] Creating CNI manager for ""
	I1209 04:47:31.988151 1211342 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1209 04:47:31.988201 1211342 start.go:353] cluster config:
	{Name:functional-667319 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-667319 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 04:47:31.991222 1211342 out.go:179] * dry-run validation complete!
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 09 04:45:34 functional-667319 containerd[9667]: time="2025-12-09T04:45:34.304901430Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-667319\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 09 04:45:35 functional-667319 containerd[9667]: time="2025-12-09T04:45:35.122240132Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-667319\""
	Dec 09 04:45:35 functional-667319 containerd[9667]: time="2025-12-09T04:45:35.124969737Z" level=info msg="ImageDelete event name:\"docker.io/kicbase/echo-server:functional-667319\""
	Dec 09 04:45:35 functional-667319 containerd[9667]: time="2025-12-09T04:45:35.127379711Z" level=info msg="ImageDelete event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\""
	Dec 09 04:45:35 functional-667319 containerd[9667]: time="2025-12-09T04:45:35.136337700Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-667319\" returns successfully"
	Dec 09 04:45:35 functional-667319 containerd[9667]: time="2025-12-09T04:45:35.369054172Z" level=info msg="No images store for sha256:dd3309dec5df27eec01ab59220514c77e78d9b5409234aefaeee1c6a1c609658"
	Dec 09 04:45:35 functional-667319 containerd[9667]: time="2025-12-09T04:45:35.371319041Z" level=info msg="ImageCreate event name:\"docker.io/kicbase/echo-server:functional-667319\""
	Dec 09 04:45:35 functional-667319 containerd[9667]: time="2025-12-09T04:45:35.378181758Z" level=info msg="ImageCreate event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 09 04:45:35 functional-667319 containerd[9667]: time="2025-12-09T04:45:35.378777478Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-667319\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 09 04:45:36 functional-667319 containerd[9667]: time="2025-12-09T04:45:36.438263871Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-667319\""
	Dec 09 04:45:36 functional-667319 containerd[9667]: time="2025-12-09T04:45:36.441243432Z" level=info msg="ImageDelete event name:\"docker.io/kicbase/echo-server:functional-667319\""
	Dec 09 04:45:36 functional-667319 containerd[9667]: time="2025-12-09T04:45:36.443329590Z" level=info msg="ImageDelete event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\""
	Dec 09 04:45:36 functional-667319 containerd[9667]: time="2025-12-09T04:45:36.451953736Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-667319\" returns successfully"
	Dec 09 04:45:36 functional-667319 containerd[9667]: time="2025-12-09T04:45:36.699722091Z" level=info msg="No images store for sha256:dd3309dec5df27eec01ab59220514c77e78d9b5409234aefaeee1c6a1c609658"
	Dec 09 04:45:36 functional-667319 containerd[9667]: time="2025-12-09T04:45:36.702120561Z" level=info msg="ImageCreate event name:\"docker.io/kicbase/echo-server:functional-667319\""
	Dec 09 04:45:36 functional-667319 containerd[9667]: time="2025-12-09T04:45:36.709091689Z" level=info msg="ImageCreate event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 09 04:45:36 functional-667319 containerd[9667]: time="2025-12-09T04:45:36.709423744Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-667319\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 09 04:45:37 functional-667319 containerd[9667]: time="2025-12-09T04:45:37.473128393Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-667319\""
	Dec 09 04:45:37 functional-667319 containerd[9667]: time="2025-12-09T04:45:37.475631173Z" level=info msg="ImageDelete event name:\"docker.io/kicbase/echo-server:functional-667319\""
	Dec 09 04:45:37 functional-667319 containerd[9667]: time="2025-12-09T04:45:37.477592221Z" level=info msg="ImageDelete event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\""
	Dec 09 04:45:37 functional-667319 containerd[9667]: time="2025-12-09T04:45:37.490276659Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-667319\" returns successfully"
	Dec 09 04:45:38 functional-667319 containerd[9667]: time="2025-12-09T04:45:38.148598616Z" level=info msg="No images store for sha256:904ceb29077e75bbca4483a04b0d4e97cdb7c2e3a6b6f3f1bb70ace08229b0b3"
	Dec 09 04:45:38 functional-667319 containerd[9667]: time="2025-12-09T04:45:38.150763877Z" level=info msg="ImageCreate event name:\"docker.io/kicbase/echo-server:functional-667319\""
	Dec 09 04:45:38 functional-667319 containerd[9667]: time="2025-12-09T04:45:38.160850013Z" level=info msg="ImageCreate event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 09 04:45:38 functional-667319 containerd[9667]: time="2025-12-09T04:45:38.161468872Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-667319\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:47:33.861703   23578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:47:33.862569   23578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:47:33.864315   23578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:47:33.864720   23578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:47:33.866262   23578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 9 03:13] overlayfs: idmapped layers are currently not supported
	[ +25.904254] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:14] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:16] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:18] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:19] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:30] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:32] overlayfs: idmapped layers are currently not supported
	[ +28.114653] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:33] overlayfs: idmapped layers are currently not supported
	[ +23.720849] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:34] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:35] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:36] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:37] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:38] overlayfs: idmapped layers are currently not supported
	[ +23.656275] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:39] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:57] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:58] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:00] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:02] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:03] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:05] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:06] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 04:47:33 up  7:29,  0 user,  load average: 0.65, 0.32, 0.48
	Linux functional-667319 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 09 04:47:30 functional-667319 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 09 04:47:31 functional-667319 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 488.
	Dec 09 04:47:31 functional-667319 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:47:31 functional-667319 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:47:31 functional-667319 kubelet[23412]: E1209 04:47:31.504457   23412 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 09 04:47:31 functional-667319 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 09 04:47:31 functional-667319 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 09 04:47:32 functional-667319 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 489.
	Dec 09 04:47:32 functional-667319 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:47:32 functional-667319 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:47:32 functional-667319 kubelet[23427]: E1209 04:47:32.235649   23427 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 09 04:47:32 functional-667319 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 09 04:47:32 functional-667319 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 09 04:47:32 functional-667319 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 490.
	Dec 09 04:47:32 functional-667319 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:47:32 functional-667319 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:47:32 functional-667319 kubelet[23468]: E1209 04:47:32.904210   23468 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 09 04:47:32 functional-667319 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 09 04:47:32 functional-667319 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 09 04:47:33 functional-667319 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 491.
	Dec 09 04:47:33 functional-667319 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:47:33 functional-667319 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:47:33 functional-667319 kubelet[23546]: E1209 04:47:33.753162   23546 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 09 04:47:33 functional-667319 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 09 04:47:33 functional-667319 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-667319 -n functional-667319
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-667319 -n functional-667319: exit status 2 (314.189068ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-667319" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (2.27s)
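The kubelet journal above carries the root cause for this group of failures: the v1.35.0-beta.0 kubelet fails configuration validation with "kubelet is configured to not run on a host using cgroup v1", crash-loops (restart counters 488-491), the apiserver therefore never binds :8441, and every status/kubectl probe sees connection refused. A minimal host-side check (standard coreutils, assumed to be run on the builder or inside the functional-667319 container, which uses the host cgroup namespace per CgroupnsMode "host" in the inspect output):

	stat -fc %T /sys/fs/cgroup

This prints cgroup2fs on a cgroup v2 host and tmpfs on cgroup v1; tmpfs here would confirm the validation error on this Ubuntu 20.04 / 5.15 builder.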

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (2.34s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-667319 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1636: (dbg) Non-zero exit: kubectl --context functional-667319 create deployment hello-node-connect --image kicbase/echo-server: exit status 1 (57.518294ms)

                                                
                                                
** stderr ** 
	error: failed to create deployment: Post "https://192.168.49.2:8441/apis/apps/v1/namespaces/default/deployments?fieldManager=kubectl-create&fieldValidation=Strict": dial tcp 192.168.49.2:8441: connect: connection refused

                                                
                                                
** /stderr **
functional_test.go:1638: failed to create hello-node deployment with this command "kubectl --context functional-667319 create deployment hello-node-connect --image kicbase/echo-server": exit status 1.
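Every kubectl call in the post-mortem below fails with the same connection refused, so a direct probe of the published apiserver port (8441/tcp -> 127.0.0.1:33903 per the docker inspect further down) cheaply separates a dead container from a dead apiserver. A sketch; /livez is a standard kube-apiserver health endpoint, and connection refused on it matches the kubelet crash-loop seen in the logs:

	curl -sk https://127.0.0.1:33903/livez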
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-667319 describe po hello-node-connect
functional_test.go:1612: (dbg) Non-zero exit: kubectl --context functional-667319 describe po hello-node-connect: exit status 1 (62.389508ms)

                                                
                                                
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:1614: "kubectl --context functional-667319 describe po hello-node-connect" failed: exit status 1
functional_test.go:1616: hello-node pod describe:
functional_test.go:1618: (dbg) Run:  kubectl --context functional-667319 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-667319 logs -l app=hello-node-connect: exit status 1 (62.074324ms)

                                                
                                                
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:1620: "kubectl --context functional-667319 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-667319 describe svc hello-node-connect
functional_test.go:1624: (dbg) Non-zero exit: kubectl --context functional-667319 describe svc hello-node-connect: exit status 1 (63.63976ms)

                                                
                                                
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:1626: "kubectl --context functional-667319 describe svc hello-node-connect" failed: exit status 1
functional_test.go:1628: hello-node svc describe:
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-667319
helpers_test.go:243: (dbg) docker inspect functional-667319:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e5b6511799c8d5c445a335a3bd5cc9a61b518fc27ac93dad8800da366ef32129",
	        "Created": "2025-12-09T04:18:34.060957311Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1182075,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-09T04:18:34.126944158Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e4eb91ed18a24161fce60c7cdd660144ecd5b8c5029dc2dea2c5e423c2f48ce4",
	        "ResolvConfPath": "/var/lib/docker/containers/e5b6511799c8d5c445a335a3bd5cc9a61b518fc27ac93dad8800da366ef32129/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e5b6511799c8d5c445a335a3bd5cc9a61b518fc27ac93dad8800da366ef32129/hostname",
	        "HostsPath": "/var/lib/docker/containers/e5b6511799c8d5c445a335a3bd5cc9a61b518fc27ac93dad8800da366ef32129/hosts",
	        "LogPath": "/var/lib/docker/containers/e5b6511799c8d5c445a335a3bd5cc9a61b518fc27ac93dad8800da366ef32129/e5b6511799c8d5c445a335a3bd5cc9a61b518fc27ac93dad8800da366ef32129-json.log",
	        "Name": "/functional-667319",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-667319:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-667319",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e5b6511799c8d5c445a335a3bd5cc9a61b518fc27ac93dad8800da366ef32129",
	                "LowerDir": "/var/lib/docker/overlay2/b0239006282b6e4609a1f554d0a3fb94c749a13505795c8e4078cb2db194e8e0-init/diff:/var/lib/docker/overlay2/c44bb57aa59cc265266f37f2bb6e7ec0e7d641c3b4aeaa57e6d23deec6f0d1d4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b0239006282b6e4609a1f554d0a3fb94c749a13505795c8e4078cb2db194e8e0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b0239006282b6e4609a1f554d0a3fb94c749a13505795c8e4078cb2db194e8e0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b0239006282b6e4609a1f554d0a3fb94c749a13505795c8e4078cb2db194e8e0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-667319",
	                "Source": "/var/lib/docker/volumes/functional-667319/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-667319",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-667319",
	                "name.minikube.sigs.k8s.io": "functional-667319",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7c81dabcd9e57af9bce0bc0f5619f6ef3a27af43f4b649283a5bd778ab256415",
	            "SandboxKey": "/var/run/docker/netns/7c81dabcd9e5",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33900"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33901"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33904"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33902"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33903"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-667319": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "fe:40:bd:46:56:d8",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "88b3a65de70c15005c532a44219284d4df94e474ca5b78b04514c2f932b03beb",
	                    "EndpointID": "bdef7b156f4a28c1f641ae70b42db2750bb810ae6fe93fd65325e62eb232fe91",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-667319",
	                        "e5b6511799c8"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-667319 -n functional-667319
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-667319 -n functional-667319: exit status 2 (328.659171ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
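The single-field probes in this post-mortem (Host here, APIServer after the logs dump, as in the StatusCmd post-mortem above) can be folded into one call; a sketch using the same Go-template fields the harness queries individually:

	out/minikube-linux-arm64 status -p functional-667319 -n functional-667319 --format '{{.Host}} {{.Kubelet}} {{.APIServer}}'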
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-667319 logs -n 25
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                        ARGS                                                                         │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-667319 ssh -n functional-667319 sudo cat /tmp/does/not/exist/cp-test.txt                                                                 │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:45 UTC │ 09 Dec 25 04:45 UTC │
	│ ssh     │ functional-667319 ssh echo hello                                                                                                                    │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:45 UTC │ 09 Dec 25 04:45 UTC │
	│ ssh     │ functional-667319 ssh cat /etc/hostname                                                                                                             │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:45 UTC │ 09 Dec 25 04:45 UTC │
	│ mount   │ -p functional-667319 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2835554719/001:/mount-9p --alsologtostderr -v=1              │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:45 UTC │                     │
	│ ssh     │ functional-667319 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:45 UTC │                     │
	│ ssh     │ functional-667319 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:45 UTC │ 09 Dec 25 04:45 UTC │
	│ ssh     │ functional-667319 ssh -- ls -la /mount-9p                                                                                                           │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:45 UTC │ 09 Dec 25 04:45 UTC │
	│ ssh     │ functional-667319 ssh cat /mount-9p/test-1765255545185688072                                                                                        │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:45 UTC │ 09 Dec 25 04:45 UTC │
	│ ssh     │ functional-667319 ssh mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates                                                                    │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:45 UTC │                     │
	│ ssh     │ functional-667319 ssh sudo umount -f /mount-9p                                                                                                      │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:45 UTC │ 09 Dec 25 04:45 UTC │
	│ ssh     │ functional-667319 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:45 UTC │                     │
	│ mount   │ -p functional-667319 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo4027315530/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:45 UTC │                     │
	│ ssh     │ functional-667319 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:45 UTC │ 09 Dec 25 04:45 UTC │
	│ ssh     │ functional-667319 ssh -- ls -la /mount-9p                                                                                                           │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:45 UTC │ 09 Dec 25 04:45 UTC │
	│ ssh     │ functional-667319 ssh sudo umount -f /mount-9p                                                                                                      │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:45 UTC │                     │
	│ mount   │ -p functional-667319 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3101865674/001:/mount1 --alsologtostderr -v=1                │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:45 UTC │                     │
	│ mount   │ -p functional-667319 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3101865674/001:/mount2 --alsologtostderr -v=1                │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:45 UTC │                     │
	│ ssh     │ functional-667319 ssh findmnt -T /mount1                                                                                                            │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:45 UTC │                     │
	│ mount   │ -p functional-667319 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3101865674/001:/mount3 --alsologtostderr -v=1                │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:45 UTC │                     │
	│ ssh     │ functional-667319 ssh findmnt -T /mount1                                                                                                            │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:45 UTC │ 09 Dec 25 04:45 UTC │
	│ ssh     │ functional-667319 ssh findmnt -T /mount2                                                                                                            │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:45 UTC │ 09 Dec 25 04:45 UTC │
	│ ssh     │ functional-667319 ssh findmnt -T /mount3                                                                                                            │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:45 UTC │ 09 Dec 25 04:45 UTC │
	│ mount   │ -p functional-667319 --kill=true                                                                                                                    │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:45 UTC │                     │
	│ addons  │ functional-667319 addons list                                                                                                                       │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:47 UTC │ 09 Dec 25 04:47 UTC │
	│ addons  │ functional-667319 addons list -o json                                                                                                               │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:47 UTC │ 09 Dec 25 04:47 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/09 04:33:11
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 04:33:11.365325 1193189 out.go:360] Setting OutFile to fd 1 ...
	I1209 04:33:11.365424 1193189 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 04:33:11.365428 1193189 out.go:374] Setting ErrFile to fd 2...
	I1209 04:33:11.365431 1193189 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 04:33:11.365670 1193189 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-1142328/.minikube/bin
	I1209 04:33:11.366033 1193189 out.go:368] Setting JSON to false
	I1209 04:33:11.366848 1193189 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":26115,"bootTime":1765228677,"procs":156,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1209 04:33:11.366902 1193189 start.go:143] virtualization:  
	I1209 04:33:11.370321 1193189 out.go:179] * [functional-667319] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1209 04:33:11.373998 1193189 out.go:179]   - MINIKUBE_LOCATION=22081
	I1209 04:33:11.374082 1193189 notify.go:221] Checking for updates...
	I1209 04:33:11.379822 1193189 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 04:33:11.382611 1193189 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22081-1142328/kubeconfig
	I1209 04:33:11.385432 1193189 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-1142328/.minikube
	I1209 04:33:11.388728 1193189 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1209 04:33:11.391441 1193189 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 04:33:11.394813 1193189 config.go:182] Loaded profile config "functional-667319": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1209 04:33:11.394910 1193189 driver.go:422] Setting default libvirt URI to qemu:///system
	I1209 04:33:11.422551 1193189 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1209 04:33:11.422654 1193189 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 04:33:11.481358 1193189 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-09 04:33:11.472506561 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1209 04:33:11.481459 1193189 docker.go:319] overlay module found
	I1209 04:33:11.484471 1193189 out.go:179] * Using the docker driver based on existing profile
	I1209 04:33:11.487406 1193189 start.go:309] selected driver: docker
	I1209 04:33:11.487427 1193189 start.go:927] validating driver "docker" against &{Name:functional-667319 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-667319 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 04:33:11.487512 1193189 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 04:33:11.487612 1193189 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 04:33:11.542290 1193189 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-09 04:33:11.533632532 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1209 04:33:11.542703 1193189 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 04:33:11.542726 1193189 cni.go:84] Creating CNI manager for ""
	I1209 04:33:11.542784 1193189 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1209 04:33:11.542826 1193189 start.go:353] cluster config:
	{Name:functional-667319 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-667319 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 04:33:11.546045 1193189 out.go:179] * Starting "functional-667319" primary control-plane node in "functional-667319" cluster
	I1209 04:33:11.548925 1193189 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1209 04:33:11.551638 1193189 out.go:179] * Pulling base image v0.0.48-1765184860-22066 ...
	I1209 04:33:11.554609 1193189 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1209 04:33:11.554645 1193189 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1209 04:33:11.554670 1193189 cache.go:65] Caching tarball of preloaded images
	I1209 04:33:11.554693 1193189 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local docker daemon
	I1209 04:33:11.554756 1193189 preload.go:238] Found /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1209 04:33:11.554765 1193189 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1209 04:33:11.554868 1193189 profile.go:143] Saving config to /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/config.json ...
	I1209 04:33:11.573683 1193189 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local docker daemon, skipping pull
	I1209 04:33:11.573695 1193189 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c exists in daemon, skipping load
	I1209 04:33:11.573713 1193189 cache.go:243] Successfully downloaded all kic artifacts
	I1209 04:33:11.573740 1193189 start.go:360] acquireMachinesLock for functional-667319: {Name:mk6c31f0747796f5f8ac8ea1653d6ee60fe2a47d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 04:33:11.573797 1193189 start.go:364] duration metric: took 42.739µs to acquireMachinesLock for "functional-667319"
	I1209 04:33:11.573815 1193189 start.go:96] Skipping create...Using existing machine configuration
	I1209 04:33:11.573819 1193189 fix.go:54] fixHost starting: 
	I1209 04:33:11.574074 1193189 cli_runner.go:164] Run: docker container inspect functional-667319 --format={{.State.Status}}
	I1209 04:33:11.589947 1193189 fix.go:112] recreateIfNeeded on functional-667319: state=Running err=<nil>
	W1209 04:33:11.589973 1193189 fix.go:138] unexpected machine state, will restart: <nil>
	I1209 04:33:11.593148 1193189 out.go:252] * Updating the running docker "functional-667319" container ...
	I1209 04:33:11.593168 1193189 machine.go:94] provisionDockerMachine start ...
	I1209 04:33:11.593256 1193189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-667319
	I1209 04:33:11.609392 1193189 main.go:143] libmachine: Using SSH client type: native
	I1209 04:33:11.609722 1193189 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 33900 <nil> <nil>}
	I1209 04:33:11.609729 1193189 main.go:143] libmachine: About to run SSH command:
	hostname
	I1209 04:33:11.759408 1193189 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-667319
	
	I1209 04:33:11.759422 1193189 ubuntu.go:182] provisioning hostname "functional-667319"
	I1209 04:33:11.759483 1193189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-667319
	I1209 04:33:11.776859 1193189 main.go:143] libmachine: Using SSH client type: native
	I1209 04:33:11.777189 1193189 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 33900 <nil> <nil>}
	I1209 04:33:11.777198 1193189 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-667319 && echo "functional-667319" | sudo tee /etc/hostname
	I1209 04:33:11.939211 1193189 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-667319
	
	I1209 04:33:11.939295 1193189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-667319
	I1209 04:33:11.957143 1193189 main.go:143] libmachine: Using SSH client type: native
	I1209 04:33:11.957494 1193189 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 33900 <nil> <nil>}
	I1209 04:33:11.957508 1193189 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-667319' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-667319/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-667319' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 04:33:12.113237 1193189 main.go:143] libmachine: SSH cmd err, output: <nil>: 
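	A minimal sketch of what the provisioning script above guarantees, assuming shell access to the "functional-667319" node: the grep/sed block is idempotent, so re-running it never duplicates the /etc/hosts entry.
	  grep '127.0.1.1' /etc/hosts   # expect exactly one line: 127.0.1.1 functional-667319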
	I1209 04:33:12.113254 1193189 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22081-1142328/.minikube CaCertPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22081-1142328/.minikube}
	I1209 04:33:12.113278 1193189 ubuntu.go:190] setting up certificates
	I1209 04:33:12.113294 1193189 provision.go:84] configureAuth start
	I1209 04:33:12.113362 1193189 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-667319
	I1209 04:33:12.130912 1193189 provision.go:143] copyHostCerts
	I1209 04:33:12.131003 1193189 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.pem, removing ...
	I1209 04:33:12.131010 1193189 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.pem
	I1209 04:33:12.131086 1193189 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.pem (1078 bytes)
	I1209 04:33:12.131177 1193189 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-1142328/.minikube/cert.pem, removing ...
	I1209 04:33:12.131181 1193189 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-1142328/.minikube/cert.pem
	I1209 04:33:12.131205 1193189 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22081-1142328/.minikube/cert.pem (1123 bytes)
	I1209 04:33:12.131250 1193189 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-1142328/.minikube/key.pem, removing ...
	I1209 04:33:12.131254 1193189 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-1142328/.minikube/key.pem
	I1209 04:33:12.131276 1193189 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22081-1142328/.minikube/key.pem (1675 bytes)
	I1209 04:33:12.131318 1193189 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca-key.pem org=jenkins.functional-667319 san=[127.0.0.1 192.168.49.2 functional-667319 localhost minikube]
	I1209 04:33:12.827484 1193189 provision.go:177] copyRemoteCerts
	I1209 04:33:12.827535 1193189 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 04:33:12.827573 1193189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-667319
	I1209 04:33:12.846654 1193189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/functional-667319/id_rsa Username:docker}
	I1209 04:33:12.951639 1193189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1209 04:33:12.968320 1193189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1209 04:33:12.985745 1193189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1209 04:33:13.004711 1193189 provision.go:87] duration metric: took 891.395644ms to configureAuth
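	The configureAuth step above generates a server certificate whose SANs are listed in the log (127.0.0.1, 192.168.49.2, functional-667319, localhost, minikube). A hedged way to confirm them on the host, assuming OpenSSL 1.1.1 or newer and using the server.pem path from the log:
	  openssl x509 -noout -ext subjectAltName \
	    -in /home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server.pem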
	I1209 04:33:13.004730 1193189 ubuntu.go:206] setting minikube options for container-runtime
	I1209 04:33:13.005000 1193189 config.go:182] Loaded profile config "functional-667319": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1209 04:33:13.005006 1193189 machine.go:97] duration metric: took 1.411833664s to provisionDockerMachine
	I1209 04:33:13.005012 1193189 start.go:293] postStartSetup for "functional-667319" (driver="docker")
	I1209 04:33:13.005022 1193189 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 04:33:13.005072 1193189 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 04:33:13.005108 1193189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-667319
	I1209 04:33:13.023376 1193189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/functional-667319/id_rsa Username:docker}
	I1209 04:33:13.128032 1193189 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 04:33:13.131471 1193189 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1209 04:33:13.131490 1193189 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1209 04:33:13.131500 1193189 filesync.go:126] Scanning /home/jenkins/minikube-integration/22081-1142328/.minikube/addons for local assets ...
	I1209 04:33:13.131552 1193189 filesync.go:126] Scanning /home/jenkins/minikube-integration/22081-1142328/.minikube/files for local assets ...
	I1209 04:33:13.131625 1193189 filesync.go:149] local asset: /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/ssl/certs/11442312.pem -> 11442312.pem in /etc/ssl/certs
	I1209 04:33:13.131701 1193189 filesync.go:149] local asset: /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/test/nested/copy/1144231/hosts -> hosts in /etc/test/nested/copy/1144231
	I1209 04:33:13.131749 1193189 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1144231
	I1209 04:33:13.139091 1193189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/ssl/certs/11442312.pem --> /etc/ssl/certs/11442312.pem (1708 bytes)
	I1209 04:33:13.156114 1193189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/test/nested/copy/1144231/hosts --> /etc/test/nested/copy/1144231/hosts (40 bytes)
	I1209 04:33:13.173744 1193189 start.go:296] duration metric: took 168.716821ms for postStartSetup
	I1209 04:33:13.173816 1193189 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1209 04:33:13.173854 1193189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-667319
	I1209 04:33:13.198555 1193189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/functional-667319/id_rsa Username:docker}
	I1209 04:33:13.300903 1193189 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1209 04:33:13.305102 1193189 fix.go:56] duration metric: took 1.731276319s for fixHost
	I1209 04:33:13.305116 1193189 start.go:83] releasing machines lock for "functional-667319", held for 1.731312428s
	I1209 04:33:13.305216 1193189 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-667319
	I1209 04:33:13.322301 1193189 ssh_runner.go:195] Run: cat /version.json
	I1209 04:33:13.322356 1193189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-667319
	I1209 04:33:13.322602 1193189 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 04:33:13.322654 1193189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-667319
	I1209 04:33:13.345854 1193189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/functional-667319/id_rsa Username:docker}
	I1209 04:33:13.346808 1193189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/functional-667319/id_rsa Username:docker}
	I1209 04:33:13.447601 1193189 ssh_runner.go:195] Run: systemctl --version
	I1209 04:33:13.537710 1193189 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 04:33:13.542181 1193189 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 04:33:13.542253 1193189 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 04:33:13.550371 1193189 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
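	The find/mv step above sidelines any bridge or podman CNI configs by renaming them with a .mk_disabled suffix; in this run none were present. A small sketch to list what such a run disabled, assuming shell access to the node:
	  ls -la /etc/cni/net.d/*.mk_disabled 2>/dev/null || echo "no CNI configs were disabled"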
	I1209 04:33:13.550385 1193189 start.go:496] detecting cgroup driver to use...
	I1209 04:33:13.550417 1193189 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1209 04:33:13.550479 1193189 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1209 04:33:13.565987 1193189 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1209 04:33:13.579220 1193189 docker.go:218] disabling cri-docker service (if available) ...
	I1209 04:33:13.579279 1193189 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 04:33:13.594632 1193189 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 04:33:13.607810 1193189 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 04:33:13.745867 1193189 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 04:33:13.855372 1193189 docker.go:234] disabling docker service ...
	I1209 04:33:13.855434 1193189 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 04:33:13.878271 1193189 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 04:33:13.891442 1193189 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 04:33:14.014618 1193189 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 04:33:14.144235 1193189 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 04:33:14.157713 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 04:33:14.171634 1193189 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1209 04:33:14.180595 1193189 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1209 04:33:14.189855 1193189 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1209 04:33:14.189928 1193189 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1209 04:33:14.198663 1193189 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1209 04:33:14.207241 1193189 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1209 04:33:14.215864 1193189 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1209 04:33:14.224572 1193189 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 04:33:14.232585 1193189 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1209 04:33:14.241204 1193189 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1209 04:33:14.249919 1193189 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1209 04:33:14.258812 1193189 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 04:33:14.266241 1193189 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 04:33:14.273587 1193189 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 04:33:14.393428 1193189 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1209 04:33:14.528665 1193189 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1209 04:33:14.528726 1193189 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1209 04:33:14.532955 1193189 start.go:564] Will wait 60s for crictl version
	I1209 04:33:14.533056 1193189 ssh_runner.go:195] Run: which crictl
	I1209 04:33:14.541891 1193189 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1209 04:33:14.570282 1193189 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1209 04:33:14.570350 1193189 ssh_runner.go:195] Run: containerd --version
	I1209 04:33:14.592081 1193189 ssh_runner.go:195] Run: containerd --version
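	The sequence above rewires the node's container runtime: crictl is pointed at containerd's socket via /etc/crictl.yaml, then a series of in-place sed edits sets the sandbox image, the cgroupfs cgroup driver, the runc v2 runtime, the CNI conf dir, and unprivileged ports in /etc/containerd/config.toml before containerd is restarted. A hedged spot-check of the values those edits should leave behind (key names taken from the sed patterns in the log; run on the node):
	  cat /etc/crictl.yaml   # expect: runtime-endpoint: unix:///run/containerd/containerd.sock
	  grep -E 'sandbox_image|SystemdCgroup|conf_dir|enable_unprivileged_ports' /etc/containerd/config.toml
	  # illustrative expected values: sandbox_image = "registry.k8s.io/pause:3.10.1",
	  # SystemdCgroup = false, conf_dir = "/etc/cni/net.d", enable_unprivileged_ports = true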
	I1209 04:33:14.617312 1193189 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1209 04:33:14.620294 1193189 cli_runner.go:164] Run: docker network inspect functional-667319 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1209 04:33:14.636105 1193189 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1209 04:33:14.643286 1193189 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1209 04:33:14.646097 1193189 kubeadm.go:884] updating cluster {Name:functional-667319 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-667319 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 04:33:14.646234 1193189 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1209 04:33:14.646312 1193189 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 04:33:14.671604 1193189 containerd.go:627] all images are preloaded for containerd runtime.
	I1209 04:33:14.671615 1193189 containerd.go:534] Images already preloaded, skipping extraction
	I1209 04:33:14.671676 1193189 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 04:33:14.702360 1193189 containerd.go:627] all images are preloaded for containerd runtime.
	I1209 04:33:14.702371 1193189 cache_images.go:86] Images are preloaded, skipping loading
	I1209 04:33:14.702376 1193189 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
	I1209 04:33:14.702482 1193189 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-667319 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-667319 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1209 04:33:14.702549 1193189 ssh_runner.go:195] Run: sudo crictl info
	I1209 04:33:14.731154 1193189 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1209 04:33:14.731172 1193189 cni.go:84] Creating CNI manager for ""
	I1209 04:33:14.731179 1193189 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1209 04:33:14.731190 1193189 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1209 04:33:14.731212 1193189 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-667319 NodeName:functional-667319 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1209 04:33:14.731316 1193189 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-667319"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
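	The generated kubeadm config above stacks four documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) in one file. A hedged way to sanity-check such a file before the init phases run, assuming a recent kubeadm that ships the validate subcommand and using the paths from the log:
	  sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate \
	    --config /var/tmp/minikube/kubeadm.yaml.new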
	I1209 04:33:14.731385 1193189 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1209 04:33:14.742794 1193189 binaries.go:51] Found k8s binaries, skipping transfer
	I1209 04:33:14.742854 1193189 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1209 04:33:14.750345 1193189 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1209 04:33:14.763345 1193189 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1209 04:33:14.775780 1193189 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2087 bytes)
	I1209 04:33:14.788798 1193189 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1209 04:33:14.792560 1193189 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 04:33:14.907792 1193189 ssh_runner.go:195] Run: sudo systemctl start kubelet
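	A hedged check that the kubelet unit and drop-in scp'd above took effect (file paths from the scp lines; run on the node):
	  systemctl cat kubelet      # should show kubelet.service plus 10-kubeadm.conf with the ExecStart rendered above
	  systemctl is-active kubelet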
	I1209 04:33:15.431459 1193189 certs.go:69] Setting up /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319 for IP: 192.168.49.2
	I1209 04:33:15.431470 1193189 certs.go:195] generating shared ca certs ...
	I1209 04:33:15.431485 1193189 certs.go:227] acquiring lock for ca certs: {Name:mk15788702f8c4e23b5aeab3f44961d296fab259 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 04:33:15.431654 1193189 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.key
	I1209 04:33:15.431695 1193189 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/proxy-client-ca.key
	I1209 04:33:15.431701 1193189 certs.go:257] generating profile certs ...
	I1209 04:33:15.431782 1193189 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/client.key
	I1209 04:33:15.431840 1193189 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/apiserver.key.c80eb595
	I1209 04:33:15.431875 1193189 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/proxy-client.key
	I1209 04:33:15.431982 1193189 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/1144231.pem (1338 bytes)
	W1209 04:33:15.432037 1193189 certs.go:480] ignoring /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/1144231_empty.pem, impossibly tiny 0 bytes
	I1209 04:33:15.432046 1193189 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca-key.pem (1679 bytes)
	I1209 04:33:15.432075 1193189 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem (1078 bytes)
	I1209 04:33:15.432099 1193189 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/cert.pem (1123 bytes)
	I1209 04:33:15.432147 1193189 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/key.pem (1675 bytes)
	I1209 04:33:15.432195 1193189 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/ssl/certs/11442312.pem (1708 bytes)
	I1209 04:33:15.432796 1193189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 04:33:15.450868 1193189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1209 04:33:15.469951 1193189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 04:33:15.488029 1193189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1209 04:33:15.507676 1193189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1209 04:33:15.528269 1193189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1209 04:33:15.547354 1193189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 04:33:15.565510 1193189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1209 04:33:15.583378 1193189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/ssl/certs/11442312.pem --> /usr/share/ca-certificates/11442312.pem (1708 bytes)
	I1209 04:33:15.601546 1193189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 04:33:15.619028 1193189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/1144231.pem --> /usr/share/ca-certificates/1144231.pem (1338 bytes)
	I1209 04:33:15.636618 1193189 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 04:33:15.649310 1193189 ssh_runner.go:195] Run: openssl version
	I1209 04:33:15.655222 1193189 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/11442312.pem
	I1209 04:33:15.662530 1193189 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/11442312.pem /etc/ssl/certs/11442312.pem
	I1209 04:33:15.670168 1193189 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11442312.pem
	I1209 04:33:15.673829 1193189 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 04:18 /usr/share/ca-certificates/11442312.pem
	I1209 04:33:15.673881 1193189 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11442312.pem
	I1209 04:33:15.715756 1193189 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1209 04:33:15.723175 1193189 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1209 04:33:15.730584 1193189 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1209 04:33:15.738232 1193189 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 04:33:15.742081 1193189 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 04:09 /usr/share/ca-certificates/minikubeCA.pem
	I1209 04:33:15.742141 1193189 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 04:33:15.786133 1193189 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1209 04:33:15.793720 1193189 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1144231.pem
	I1209 04:33:15.801263 1193189 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1144231.pem /etc/ssl/certs/1144231.pem
	I1209 04:33:15.808357 1193189 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1144231.pem
	I1209 04:33:15.812098 1193189 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 04:18 /usr/share/ca-certificates/1144231.pem
	I1209 04:33:15.812149 1193189 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1144231.pem
	I1209 04:33:15.854297 1193189 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1209 04:33:15.861740 1193189 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 04:33:15.865303 1193189 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1209 04:33:15.905838 1193189 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1209 04:33:15.946617 1193189 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1209 04:33:15.987357 1193189 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1209 04:33:16.032170 1193189 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1209 04:33:16.075134 1193189 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
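	Each openssl run above exits non-zero if the certificate expires within 86400 seconds (24 hours), which is how minikube decides whether a cert needs regeneration. The same check with a readable expiry date, shown for one of the certs from the log:
	  openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver-kubelet-client.crt
	  openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	    && echo "valid for >= 24h" || echo "expires within 24h"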
	I1209 04:33:16.116540 1193189 kubeadm.go:401] StartCluster: {Name:functional-667319 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-667319 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 04:33:16.116615 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1209 04:33:16.116676 1193189 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 04:33:16.141721 1193189 cri.go:89] found id: ""
	I1209 04:33:16.141780 1193189 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1209 04:33:16.149204 1193189 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1209 04:33:16.149214 1193189 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1209 04:33:16.149263 1193189 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1209 04:33:16.156279 1193189 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1209 04:33:16.156783 1193189 kubeconfig.go:125] found "functional-667319" server: "https://192.168.49.2:8441"
	I1209 04:33:16.159840 1193189 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1209 04:33:16.167426 1193189 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-09 04:18:41.945308258 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-09 04:33:14.782796805 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
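	The drift detection above is a plain unified diff between the kubeadm config already on disk and the freshly rendered one; a non-zero diff exit status triggers the reconfigure path that follows. Reproduced as a standalone check (paths from the log):
	  sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new; echo "diff exit: $?"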
	I1209 04:33:16.167445 1193189 kubeadm.go:1161] stopping kube-system containers ...
	I1209 04:33:16.167459 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I1209 04:33:16.167517 1193189 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 04:33:16.201963 1193189 cri.go:89] found id: ""
	I1209 04:33:16.202024 1193189 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1209 04:33:16.219973 1193189 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 04:33:16.227472 1193189 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5635 Dec  9 04:22 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Dec  9 04:22 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Dec  9 04:22 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5588 Dec  9 04:22 /etc/kubernetes/scheduler.conf
	
	I1209 04:33:16.227532 1193189 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1209 04:33:16.234796 1193189 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1209 04:33:16.241862 1193189 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1209 04:33:16.241916 1193189 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 04:33:16.249083 1193189 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1209 04:33:16.256206 1193189 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1209 04:33:16.256261 1193189 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 04:33:16.263352 1193189 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1209 04:33:16.270362 1193189 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1209 04:33:16.270416 1193189 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
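
Editor's note: the grep/rm pairs above implement a simple invalidation rule: any kubeconfig under /etc/kubernetes that does not reference the expected control-plane endpoint is deleted so that the subsequent `kubeadm init phase kubeconfig all` regenerates it (admin.conf matched and is kept; kubelet.conf, controller-manager.conf and scheduler.conf did not). A hedged sketch of that loop, not minikube's exact code:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	const endpoint = "https://control-plane.minikube.internal:8441"
    	for _, f := range []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	} {
    		// grep exits non-zero when the endpoint is absent from the file.
    		if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
    			fmt.Printf("%s does not reference %s - removing\n", f, endpoint)
    			exec.Command("sudo", "rm", "-f", f).Run()
    		}
    	}
    }
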
	I1209 04:33:16.277706 1193189 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 04:33:16.285107 1193189 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 04:33:16.327899 1193189 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 04:33:17.810490 1193189 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.482563431s)
	I1209 04:33:17.810548 1193189 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1209 04:33:18.017563 1193189 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 04:33:18.086202 1193189 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
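
Editor's note: rather than a full `kubeadm init`, the restart drives the individual init phases in order (certs, kubeconfig, kubelet-start, control-plane, etcd), each against the same /var/tmp/minikube/kubeadm.yaml, with PATH pinned to the versioned binaries directory. A sketch of that sequence (assumed structure, reconstructed from the commands in the log):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	phases := []string{
    		"certs all", "kubeconfig all", "kubelet-start",
    		"control-plane all", "etcd local",
    	}
    	for _, p := range phases {
    		script := fmt.Sprintf(`env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" `+
    			`kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`, p)
    		if out, err := exec.Command("sudo", "/bin/bash", "-c", script).CombinedOutput(); err != nil {
    			fmt.Printf("phase %q failed: %v\n%s", p, err, out)
    			return
    		}
    	}
    }
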
	I1209 04:33:18.134715 1193189 api_server.go:52] waiting for apiserver process to appear ...
	I1209 04:33:18.134785 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	[... 118 further probes elided: `sudo pgrep -xnf kube-apiserver.*minikube.*` repeated every ~500 ms, 04:33:18.635 through 04:34:17.135, all without a match ...]
	I1209 04:34:17.635902 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
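
Editor's note: the probe loop above is a fixed-interval wait for the apiserver process: `pgrep -xnf` matches against the full command line, so the loop would complete as soon as a kube-apiserver process whose arguments mention minikube exists. A sketch of the loop; the ~500 ms period is read off the timestamps, and the one-minute budget is an assumption:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	deadline := time.Now().Add(60 * time.Second)
    	for time.Now().Before(deadline) {
    		// pgrep exits 0 only when a matching process exists.
    		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
    			fmt.Println("apiserver process appeared")
    			return
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("timed out waiting for apiserver process")
    }

In this run the probe never succeeds, so the tooling falls through to the diagnostic log collection below and then resumes probing.
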
	I1209 04:34:18.135877 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:34:18.135980 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:34:18.160417 1193189 cri.go:89] found id: ""
	I1209 04:34:18.160431 1193189 logs.go:282] 0 containers: []
	W1209 04:34:18.160438 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:34:18.160442 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:34:18.160499 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:34:18.186014 1193189 cri.go:89] found id: ""
	I1209 04:34:18.186028 1193189 logs.go:282] 0 containers: []
	W1209 04:34:18.186035 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:34:18.186040 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:34:18.186102 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:34:18.209963 1193189 cri.go:89] found id: ""
	I1209 04:34:18.209977 1193189 logs.go:282] 0 containers: []
	W1209 04:34:18.209983 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:34:18.209989 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:34:18.210048 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:34:18.234704 1193189 cri.go:89] found id: ""
	I1209 04:34:18.234723 1193189 logs.go:282] 0 containers: []
	W1209 04:34:18.234730 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:34:18.234737 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:34:18.234794 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:34:18.260085 1193189 cri.go:89] found id: ""
	I1209 04:34:18.260100 1193189 logs.go:282] 0 containers: []
	W1209 04:34:18.260107 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:34:18.260112 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:34:18.260170 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:34:18.284959 1193189 cri.go:89] found id: ""
	I1209 04:34:18.284972 1193189 logs.go:282] 0 containers: []
	W1209 04:34:18.284978 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:34:18.284983 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:34:18.285040 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:34:18.313883 1193189 cri.go:89] found id: ""
	I1209 04:34:18.313898 1193189 logs.go:282] 0 containers: []
	W1209 04:34:18.313905 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:34:18.313912 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:34:18.313923 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:34:18.330120 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:34:18.330138 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:34:18.391936 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:34:18.383205   10728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:18.383825   10728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:18.385661   10728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:18.386372   10728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:18.388209   10728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:34:18.383205   10728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:18.383825   10728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:18.385661   10728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:18.386372   10728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:18.388209   10728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:34:18.391947 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:34:18.391957 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:34:18.457339 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:34:18.457361 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:34:18.484687 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:34:18.484702 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
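
Editor's note: once the wait gives up, each retry cycle collects the same diagnostic bundle: per-component CRI listings (all empty here), dmesg, `kubectl describe nodes` (failing with connection refused because nothing is listening on 8441), the containerd and kubelet journals, and a raw container listing. A sketch of the per-component query that produces the `No container was found matching "..."` warnings:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	components := []string{
    		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "kindnet",
    	}
    	for _, name := range components {
    		out, _ := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
    		if ids := strings.Fields(string(out)); len(ids) == 0 {
    			fmt.Printf("no container was found matching %q\n", name)
    		} else {
    			fmt.Printf("%s: %v\n", name, ids)
    		}
    	}
    }
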
	I1209 04:34:21.045358 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:34:21.056486 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:34:21.056551 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:34:21.085673 1193189 cri.go:89] found id: ""
	I1209 04:34:21.085687 1193189 logs.go:282] 0 containers: []
	W1209 04:34:21.085693 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:34:21.085699 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:34:21.085758 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:34:21.111043 1193189 cri.go:89] found id: ""
	I1209 04:34:21.111056 1193189 logs.go:282] 0 containers: []
	W1209 04:34:21.111063 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:34:21.111068 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:34:21.111128 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:34:21.137031 1193189 cri.go:89] found id: ""
	I1209 04:34:21.137044 1193189 logs.go:282] 0 containers: []
	W1209 04:34:21.137051 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:34:21.137057 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:34:21.137118 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:34:21.161998 1193189 cri.go:89] found id: ""
	I1209 04:34:21.162012 1193189 logs.go:282] 0 containers: []
	W1209 04:34:21.162019 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:34:21.162024 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:34:21.162088 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:34:21.185710 1193189 cri.go:89] found id: ""
	I1209 04:34:21.185733 1193189 logs.go:282] 0 containers: []
	W1209 04:34:21.185740 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:34:21.185745 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:34:21.185805 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:34:21.209921 1193189 cri.go:89] found id: ""
	I1209 04:34:21.209934 1193189 logs.go:282] 0 containers: []
	W1209 04:34:21.209941 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:34:21.209946 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:34:21.210007 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:34:21.237263 1193189 cri.go:89] found id: ""
	I1209 04:34:21.237277 1193189 logs.go:282] 0 containers: []
	W1209 04:34:21.237284 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:34:21.237291 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:34:21.237302 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:34:21.253947 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:34:21.253964 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:34:21.323683 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:34:21.314716   10833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:21.315370   10833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:21.316976   10833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:21.317539   10833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:21.318471   10833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:34:21.314716   10833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:21.315370   10833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:21.316976   10833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:21.317539   10833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:21.318471   10833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:34:21.323693 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:34:21.323704 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:34:21.385947 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:34:21.385968 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:34:21.414692 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:34:21.414709 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:34:23.972329 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:34:23.982273 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:34:23.982333 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:34:24.008968 1193189 cri.go:89] found id: ""
	I1209 04:34:24.008983 1193189 logs.go:282] 0 containers: []
	W1209 04:34:24.008997 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:34:24.009002 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:34:24.009067 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:34:24.035053 1193189 cri.go:89] found id: ""
	I1209 04:34:24.035067 1193189 logs.go:282] 0 containers: []
	W1209 04:34:24.035074 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:34:24.035082 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:34:24.035155 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:34:24.060177 1193189 cri.go:89] found id: ""
	I1209 04:34:24.060202 1193189 logs.go:282] 0 containers: []
	W1209 04:34:24.060210 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:34:24.060215 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:34:24.060278 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:34:24.087352 1193189 cri.go:89] found id: ""
	I1209 04:34:24.087365 1193189 logs.go:282] 0 containers: []
	W1209 04:34:24.087372 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:34:24.087377 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:34:24.087436 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:34:24.112436 1193189 cri.go:89] found id: ""
	I1209 04:34:24.112450 1193189 logs.go:282] 0 containers: []
	W1209 04:34:24.112457 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:34:24.112463 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:34:24.112523 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:34:24.138043 1193189 cri.go:89] found id: ""
	I1209 04:34:24.138057 1193189 logs.go:282] 0 containers: []
	W1209 04:34:24.138063 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:34:24.138068 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:34:24.138127 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:34:24.162473 1193189 cri.go:89] found id: ""
	I1209 04:34:24.162486 1193189 logs.go:282] 0 containers: []
	W1209 04:34:24.162493 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:34:24.162501 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:34:24.162512 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:34:24.218725 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:34:24.218750 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:34:24.237014 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:34:24.237032 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:34:24.301761 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:34:24.293159   10942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:24.293842   10942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:24.295579   10942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:24.296219   10942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:24.297932   10942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:34:24.293159   10942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:24.293842   10942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:24.295579   10942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:24.296219   10942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:24.297932   10942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:34:24.301771 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:34:24.301782 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:34:24.364794 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:34:24.364819 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:34:26.896098 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:34:26.905998 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:34:26.906059 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:34:26.937370 1193189 cri.go:89] found id: ""
	I1209 04:34:26.937384 1193189 logs.go:282] 0 containers: []
	W1209 04:34:26.937390 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:34:26.937395 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:34:26.937455 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:34:26.961993 1193189 cri.go:89] found id: ""
	I1209 04:34:26.962006 1193189 logs.go:282] 0 containers: []
	W1209 04:34:26.962013 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:34:26.962018 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:34:26.962075 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:34:26.991456 1193189 cri.go:89] found id: ""
	I1209 04:34:26.991470 1193189 logs.go:282] 0 containers: []
	W1209 04:34:26.991476 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:34:26.991495 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:34:26.991554 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:34:27.018891 1193189 cri.go:89] found id: ""
	I1209 04:34:27.018904 1193189 logs.go:282] 0 containers: []
	W1209 04:34:27.018911 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:34:27.018916 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:34:27.018974 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:34:27.043050 1193189 cri.go:89] found id: ""
	I1209 04:34:27.043064 1193189 logs.go:282] 0 containers: []
	W1209 04:34:27.043070 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:34:27.043083 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:34:27.043141 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:34:27.069538 1193189 cri.go:89] found id: ""
	I1209 04:34:27.069553 1193189 logs.go:282] 0 containers: []
	W1209 04:34:27.069559 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:34:27.069564 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:34:27.069624 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:34:27.092560 1193189 cri.go:89] found id: ""
	I1209 04:34:27.092573 1193189 logs.go:282] 0 containers: []
	W1209 04:34:27.092580 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:34:27.092588 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:34:27.092597 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:34:27.149471 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:34:27.149509 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:34:27.166396 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:34:27.166413 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:34:27.233147 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:34:27.224772   11045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:27.225484   11045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:27.227169   11045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:27.227648   11045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:27.229172   11045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:34:27.224772   11045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:27.225484   11045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:27.227169   11045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:27.227648   11045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:27.229172   11045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:34:27.233160 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:34:27.233171 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:34:27.300582 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:34:27.300607 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:34:29.831076 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:34:29.841031 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:34:29.841110 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:34:29.870054 1193189 cri.go:89] found id: ""
	I1209 04:34:29.870068 1193189 logs.go:282] 0 containers: []
	W1209 04:34:29.870074 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:34:29.870080 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:34:29.870148 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:34:29.893884 1193189 cri.go:89] found id: ""
	I1209 04:34:29.893897 1193189 logs.go:282] 0 containers: []
	W1209 04:34:29.893904 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:34:29.893909 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:34:29.893984 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:34:29.917545 1193189 cri.go:89] found id: ""
	I1209 04:34:29.917559 1193189 logs.go:282] 0 containers: []
	W1209 04:34:29.917565 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:34:29.917570 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:34:29.917636 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:34:29.948707 1193189 cri.go:89] found id: ""
	I1209 04:34:29.948721 1193189 logs.go:282] 0 containers: []
	W1209 04:34:29.948727 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:34:29.948733 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:34:29.948792 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:34:29.988977 1193189 cri.go:89] found id: ""
	I1209 04:34:29.988990 1193189 logs.go:282] 0 containers: []
	W1209 04:34:29.988997 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:34:29.989003 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:34:29.989058 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:34:30.029614 1193189 cri.go:89] found id: ""
	I1209 04:34:30.029653 1193189 logs.go:282] 0 containers: []
	W1209 04:34:30.029660 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:34:30.029666 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:34:30.029747 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:34:30.057862 1193189 cri.go:89] found id: ""
	I1209 04:34:30.057877 1193189 logs.go:282] 0 containers: []
	W1209 04:34:30.057884 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:34:30.057892 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:34:30.057903 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:34:30.125643 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:34:30.125665 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:34:30.154365 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:34:30.154393 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:34:30.218342 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:34:30.218370 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:34:30.235415 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:34:30.235438 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:34:30.300328 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:34:30.292511   11158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:30.293159   11158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:30.294635   11158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:30.295041   11158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:30.296473   11158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:34:30.292511   11158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:30.293159   11158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:30.294635   11158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:30.295041   11158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:30.296473   11158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:34:32.800607 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:34:32.810690 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:34:32.810752 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:34:32.837030 1193189 cri.go:89] found id: ""
	I1209 04:34:32.837045 1193189 logs.go:282] 0 containers: []
	W1209 04:34:32.837052 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:34:32.837058 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:34:32.837136 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:34:32.863207 1193189 cri.go:89] found id: ""
	I1209 04:34:32.863221 1193189 logs.go:282] 0 containers: []
	W1209 04:34:32.863227 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:34:32.863242 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:34:32.863302 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:34:32.888280 1193189 cri.go:89] found id: ""
	I1209 04:34:32.888294 1193189 logs.go:282] 0 containers: []
	W1209 04:34:32.888301 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:34:32.888306 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:34:32.888365 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:34:32.912361 1193189 cri.go:89] found id: ""
	I1209 04:34:32.912375 1193189 logs.go:282] 0 containers: []
	W1209 04:34:32.912381 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:34:32.912387 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:34:32.912447 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:34:32.944341 1193189 cri.go:89] found id: ""
	I1209 04:34:32.944355 1193189 logs.go:282] 0 containers: []
	W1209 04:34:32.944363 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:34:32.944368 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:34:32.944427 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:34:32.974577 1193189 cri.go:89] found id: ""
	I1209 04:34:32.974592 1193189 logs.go:282] 0 containers: []
	W1209 04:34:32.974599 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:34:32.974604 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:34:32.974667 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:34:33.007167 1193189 cri.go:89] found id: ""
	I1209 04:34:33.007182 1193189 logs.go:282] 0 containers: []
	W1209 04:34:33.007188 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:34:33.007197 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:34:33.007208 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:34:33.072653 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:34:33.064421   11240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:33.065259   11240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:33.066881   11240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:33.067179   11240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:33.068654   11240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:34:33.064421   11240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:33.065259   11240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:33.066881   11240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:33.067179   11240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:33.068654   11240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:34:33.072662 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:34:33.072674 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:34:33.135053 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:34:33.135075 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:34:33.166357 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:34:33.166374 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:34:33.223824 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:34:33.223844 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:34:35.741231 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:34:35.751318 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:34:35.751378 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:34:35.776735 1193189 cri.go:89] found id: ""
	I1209 04:34:35.776749 1193189 logs.go:282] 0 containers: []
	W1209 04:34:35.776755 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:34:35.776760 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:34:35.776825 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:34:35.805165 1193189 cri.go:89] found id: ""
	I1209 04:34:35.805178 1193189 logs.go:282] 0 containers: []
	W1209 04:34:35.805185 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:34:35.805190 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:34:35.805255 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:34:35.834579 1193189 cri.go:89] found id: ""
	I1209 04:34:35.834592 1193189 logs.go:282] 0 containers: []
	W1209 04:34:35.834599 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:34:35.834604 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:34:35.834668 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:34:35.864666 1193189 cri.go:89] found id: ""
	I1209 04:34:35.864680 1193189 logs.go:282] 0 containers: []
	W1209 04:34:35.864687 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:34:35.864692 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:34:35.864753 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:34:35.888987 1193189 cri.go:89] found id: ""
	I1209 04:34:35.889001 1193189 logs.go:282] 0 containers: []
	W1209 04:34:35.889008 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:34:35.889013 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:34:35.889073 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:34:35.913760 1193189 cri.go:89] found id: ""
	I1209 04:34:35.913774 1193189 logs.go:282] 0 containers: []
	W1209 04:34:35.913781 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:34:35.913787 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:34:35.913848 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:34:35.953491 1193189 cri.go:89] found id: ""
	I1209 04:34:35.953504 1193189 logs.go:282] 0 containers: []
	W1209 04:34:35.953511 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:34:35.953519 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:34:35.953529 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:34:36.017926 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:34:36.017947 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:34:36.036525 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:34:36.036542 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:34:36.100279 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:34:36.091110   11352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:36.091738   11352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:36.093351   11352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:36.093993   11352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:36.095716   11352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:34:36.091110   11352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:36.091738   11352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:36.093351   11352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:36.093993   11352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:36.095716   11352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:34:36.100289 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:34:36.100302 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:34:36.165176 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:34:36.165198 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
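Editor's note: the timestamps show the shape of the wait loop: roughly every three seconds minikube re-runs `sudo pgrep -xnf kube-apiserver.*minikube.*` over SSH and, while no apiserver process exists, falls back to another round of log gathering. A rough local sketch of that poll (minikube actually runs it through ssh_runner inside the node, and the deadline and exact backoff here are assumptions):

    // poll_apiserver.go - a sketch, under the assumptions above, of the
    // poll visible in the log: run pgrep until it reports a kube-apiserver
    // process or a deadline passes.
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        deadline := time.Now().Add(2 * time.Minute) // assumed deadline
        for time.Now().Before(deadline) {
            // pgrep exits 0 when at least one process matches the pattern.
            err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
            if err == nil {
                fmt.Println("kube-apiserver process found")
                return
            }
            time.Sleep(3 * time.Second) // matches the ~3s cadence in the log
        }
        fmt.Println("timed out waiting for kube-apiserver")
    }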
	I1209 04:34:38.692274 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:34:38.702150 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:34:38.702209 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:34:38.727703 1193189 cri.go:89] found id: ""
	I1209 04:34:38.727718 1193189 logs.go:282] 0 containers: []
	W1209 04:34:38.727725 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:34:38.727739 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:34:38.727802 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:34:38.752490 1193189 cri.go:89] found id: ""
	I1209 04:34:38.752509 1193189 logs.go:282] 0 containers: []
	W1209 04:34:38.752515 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:34:38.752521 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:34:38.752582 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:34:38.776648 1193189 cri.go:89] found id: ""
	I1209 04:34:38.776662 1193189 logs.go:282] 0 containers: []
	W1209 04:34:38.776668 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:34:38.776676 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:34:38.776735 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:34:38.801762 1193189 cri.go:89] found id: ""
	I1209 04:34:38.801775 1193189 logs.go:282] 0 containers: []
	W1209 04:34:38.801782 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:34:38.801788 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:34:38.801849 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:34:38.825649 1193189 cri.go:89] found id: ""
	I1209 04:34:38.825662 1193189 logs.go:282] 0 containers: []
	W1209 04:34:38.825668 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:34:38.825673 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:34:38.825734 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:34:38.850253 1193189 cri.go:89] found id: ""
	I1209 04:34:38.850268 1193189 logs.go:282] 0 containers: []
	W1209 04:34:38.850274 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:34:38.850280 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:34:38.850342 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:34:38.878018 1193189 cri.go:89] found id: ""
	I1209 04:34:38.878032 1193189 logs.go:282] 0 containers: []
	W1209 04:34:38.878039 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:34:38.878046 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:34:38.878056 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:34:38.937715 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:34:38.937734 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:34:38.956265 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:34:38.956289 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:34:39.027118 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:34:39.019252   11455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:39.020066   11455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:39.021610   11455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:39.021907   11455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:39.023382   11455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:34:39.019252   11455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:39.020066   11455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:39.021610   11455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:39.021907   11455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:39.023382   11455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:34:39.027128 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:34:39.027140 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:34:39.093921 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:34:39.093942 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
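Editor's note: each cycle then asks the CRI for every control-plane component by container name. `crictl ps -a --quiet --name=<component>` prints one container ID per line, so the empty `found id: ""` / `0 containers: []` pairs above mean containerd has not created any of them, consistent with the connection refusals. A sketch of the same seven queries, assuming crictl is on PATH and passwordless sudo is available:

    // list_components.go - a sketch, not minikube source, of the
    // per-component CRI queries repeated in the log.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet",
        }
        for _, name := range components {
            // --quiet emits only container IDs, one per line.
            out, err := exec.Command("sudo", "crictl", "ps", "-a",
                "--quiet", "--name="+name).Output()
            if err != nil {
                fmt.Printf("%s: crictl failed: %v\n", name, err)
                continue
            }
            ids := strings.Fields(string(out))
            fmt.Printf("%s: %d container(s)\n", name, len(ids)) // 0 in this run
        }
    }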
	I1209 04:34:41.623796 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:34:41.634102 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:34:41.634167 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:34:41.661702 1193189 cri.go:89] found id: ""
	I1209 04:34:41.661716 1193189 logs.go:282] 0 containers: []
	W1209 04:34:41.661723 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:34:41.661728 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:34:41.661793 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:34:41.686941 1193189 cri.go:89] found id: ""
	I1209 04:34:41.686955 1193189 logs.go:282] 0 containers: []
	W1209 04:34:41.686962 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:34:41.686967 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:34:41.687026 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:34:41.716790 1193189 cri.go:89] found id: ""
	I1209 04:34:41.716805 1193189 logs.go:282] 0 containers: []
	W1209 04:34:41.716813 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:34:41.716818 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:34:41.716881 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:34:41.741120 1193189 cri.go:89] found id: ""
	I1209 04:34:41.741135 1193189 logs.go:282] 0 containers: []
	W1209 04:34:41.741141 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:34:41.741147 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:34:41.741206 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:34:41.765600 1193189 cri.go:89] found id: ""
	I1209 04:34:41.765614 1193189 logs.go:282] 0 containers: []
	W1209 04:34:41.765622 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:34:41.765627 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:34:41.765687 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:34:41.789956 1193189 cri.go:89] found id: ""
	I1209 04:34:41.789971 1193189 logs.go:282] 0 containers: []
	W1209 04:34:41.789978 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:34:41.789983 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:34:41.790047 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:34:41.813854 1193189 cri.go:89] found id: ""
	I1209 04:34:41.813868 1193189 logs.go:282] 0 containers: []
	W1209 04:34:41.813875 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:34:41.813883 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:34:41.813893 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:34:41.869283 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:34:41.869303 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:34:41.886263 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:34:41.886279 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:34:41.966783 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:34:41.957901   11556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:41.958580   11556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:41.960469   11556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:41.961191   11556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:41.962837   11556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:34:41.957901   11556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:41.958580   11556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:41.960469   11556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:41.961191   11556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:41.962837   11556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:34:41.966793 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:34:41.966810 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:34:42.035421 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:34:42.035443 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
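Editor's note: with no containers to inspect, the only diagnostics left are host-level: the last 400 journal lines for the kubelet and containerd units, kernel messages at warning level and above, and a raw container listing. A sketch that collects the same sources, with the unit names and flags taken verbatim from the lines above:

    // gather_logs.go - a rough sketch of the fallback log collection; it
    // assumes the systemd unit names shown in the log and passwordless sudo.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func run(name string, args ...string) {
        out, err := exec.Command(name, args...).CombinedOutput()
        fmt.Printf("== %s %v (err=%v) ==\n%s\n", name, args, err, out)
    }

    func main() {
        run("sudo", "journalctl", "-u", "kubelet", "-n", "400")
        run("sudo", "journalctl", "-u", "containerd", "-n", "400")
        // -P no pager, -H human-readable, -L=never disables color,
        // --level filters to warning severity and above.
        run("sudo", "bash", "-c",
            "dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
    }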
	I1209 04:34:44.567350 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:34:44.577592 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:34:44.577656 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:34:44.607032 1193189 cri.go:89] found id: ""
	I1209 04:34:44.607047 1193189 logs.go:282] 0 containers: []
	W1209 04:34:44.607054 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:34:44.607059 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:34:44.607119 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:34:44.632031 1193189 cri.go:89] found id: ""
	I1209 04:34:44.632045 1193189 logs.go:282] 0 containers: []
	W1209 04:34:44.632052 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:34:44.632057 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:34:44.632116 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:34:44.656224 1193189 cri.go:89] found id: ""
	I1209 04:34:44.656237 1193189 logs.go:282] 0 containers: []
	W1209 04:34:44.656244 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:34:44.656249 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:34:44.656308 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:34:44.680302 1193189 cri.go:89] found id: ""
	I1209 04:34:44.680317 1193189 logs.go:282] 0 containers: []
	W1209 04:34:44.680323 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:34:44.680329 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:34:44.680389 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:34:44.705286 1193189 cri.go:89] found id: ""
	I1209 04:34:44.705301 1193189 logs.go:282] 0 containers: []
	W1209 04:34:44.705308 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:34:44.705319 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:34:44.705380 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:34:44.729365 1193189 cri.go:89] found id: ""
	I1209 04:34:44.729378 1193189 logs.go:282] 0 containers: []
	W1209 04:34:44.729385 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:34:44.729391 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:34:44.729452 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:34:44.753588 1193189 cri.go:89] found id: ""
	I1209 04:34:44.753601 1193189 logs.go:282] 0 containers: []
	W1209 04:34:44.753608 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:34:44.753616 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:34:44.753626 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:34:44.809786 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:34:44.809806 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:34:44.827005 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:34:44.827023 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:34:44.888308 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:34:44.880071   11658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:44.880850   11658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:44.882536   11658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:44.882961   11658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:44.884478   11658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:34:44.880071   11658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:44.880850   11658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:44.882536   11658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:44.882961   11658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:44.884478   11658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
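Editor's note: the `describe nodes` step does not use a host kubectl. It invokes the Kubernetes-version-pinned binary shipped inside the node, /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl, against the node-local kubeconfig, so the failure is independent of any client setup on the host. A sketch, not minikube's ssh_runner, of that invocation and of surfacing the exit status the way the log reports it:

    // describe_nodes.go - a sketch of the failing command as it would run
    // inside the node; the paths are copied from the log lines above.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("sudo",
            "/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
            "describe", "nodes",
            "--kubeconfig=/var/lib/minikube/kubeconfig")
        out, err := cmd.CombinedOutput()
        fmt.Printf("%s", out)
        if err != nil {
            // With the apiserver down this reports "exit status 1",
            // matching "Process exited with status 1" in the log.
            fmt.Println("describe nodes failed:", err)
        }
    }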
	I1209 04:34:44.888318 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:34:44.888329 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:34:44.955975 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:34:44.955994 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:34:47.492101 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:34:47.502461 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:34:47.502521 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:34:47.527075 1193189 cri.go:89] found id: ""
	I1209 04:34:47.527089 1193189 logs.go:282] 0 containers: []
	W1209 04:34:47.527095 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:34:47.527109 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:34:47.527168 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:34:47.552346 1193189 cri.go:89] found id: ""
	I1209 04:34:47.552361 1193189 logs.go:282] 0 containers: []
	W1209 04:34:47.552368 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:34:47.552372 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:34:47.552439 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:34:47.577991 1193189 cri.go:89] found id: ""
	I1209 04:34:47.578005 1193189 logs.go:282] 0 containers: []
	W1209 04:34:47.578011 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:34:47.578017 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:34:47.578077 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:34:47.601711 1193189 cri.go:89] found id: ""
	I1209 04:34:47.601726 1193189 logs.go:282] 0 containers: []
	W1209 04:34:47.601733 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:34:47.601738 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:34:47.601799 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:34:47.626261 1193189 cri.go:89] found id: ""
	I1209 04:34:47.626274 1193189 logs.go:282] 0 containers: []
	W1209 04:34:47.626281 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:34:47.626287 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:34:47.626346 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:34:47.650195 1193189 cri.go:89] found id: ""
	I1209 04:34:47.650209 1193189 logs.go:282] 0 containers: []
	W1209 04:34:47.650215 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:34:47.650222 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:34:47.650289 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:34:47.674818 1193189 cri.go:89] found id: ""
	I1209 04:34:47.674844 1193189 logs.go:282] 0 containers: []
	W1209 04:34:47.674851 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:34:47.674858 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:34:47.674868 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:34:47.730669 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:34:47.730689 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:34:47.747530 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:34:47.747553 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:34:47.809873 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:34:47.800913   11766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:47.801626   11766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:47.803387   11766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:47.804067   11766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:47.805583   11766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:34:47.800913   11766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:47.801626   11766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:47.803387   11766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:47.804067   11766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:47.805583   11766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:34:47.809893 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:34:47.809905 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:34:47.871413 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:34:47.871433 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:34:50.398661 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:34:50.408687 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:34:50.408759 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:34:50.432488 1193189 cri.go:89] found id: ""
	I1209 04:34:50.432507 1193189 logs.go:282] 0 containers: []
	W1209 04:34:50.432514 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:34:50.432520 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:34:50.432581 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:34:50.456531 1193189 cri.go:89] found id: ""
	I1209 04:34:50.456545 1193189 logs.go:282] 0 containers: []
	W1209 04:34:50.456552 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:34:50.456557 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:34:50.456617 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:34:50.484856 1193189 cri.go:89] found id: ""
	I1209 04:34:50.484871 1193189 logs.go:282] 0 containers: []
	W1209 04:34:50.484878 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:34:50.484884 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:34:50.484946 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:34:50.510277 1193189 cri.go:89] found id: ""
	I1209 04:34:50.510291 1193189 logs.go:282] 0 containers: []
	W1209 04:34:50.510297 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:34:50.510302 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:34:50.510361 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:34:50.533718 1193189 cri.go:89] found id: ""
	I1209 04:34:50.533744 1193189 logs.go:282] 0 containers: []
	W1209 04:34:50.533751 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:34:50.533756 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:34:50.533823 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:34:50.556925 1193189 cri.go:89] found id: ""
	I1209 04:34:50.556939 1193189 logs.go:282] 0 containers: []
	W1209 04:34:50.556945 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:34:50.556951 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:34:50.557010 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:34:50.581553 1193189 cri.go:89] found id: ""
	I1209 04:34:50.581567 1193189 logs.go:282] 0 containers: []
	W1209 04:34:50.581574 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:34:50.581582 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:34:50.581592 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:34:50.640077 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:34:50.640096 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:34:50.657419 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:34:50.657435 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:34:50.717755 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:34:50.710080   11873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:50.710723   11873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:50.711869   11873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:50.712446   11873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:50.713899   11873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:34:50.710080   11873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:50.710723   11873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:50.711869   11873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:50.712446   11873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:50.713899   11873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:34:50.717765 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:34:50.717775 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:34:50.784823 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:34:50.784842 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:34:53.324166 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:34:53.333904 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:34:53.333963 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:34:53.357773 1193189 cri.go:89] found id: ""
	I1209 04:34:53.357787 1193189 logs.go:282] 0 containers: []
	W1209 04:34:53.357794 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:34:53.357799 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:34:53.357869 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:34:53.381476 1193189 cri.go:89] found id: ""
	I1209 04:34:53.381490 1193189 logs.go:282] 0 containers: []
	W1209 04:34:53.381498 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:34:53.381504 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:34:53.381563 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:34:53.404639 1193189 cri.go:89] found id: ""
	I1209 04:34:53.404653 1193189 logs.go:282] 0 containers: []
	W1209 04:34:53.404671 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:34:53.404677 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:34:53.404737 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:34:53.428572 1193189 cri.go:89] found id: ""
	I1209 04:34:53.428586 1193189 logs.go:282] 0 containers: []
	W1209 04:34:53.428593 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:34:53.428598 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:34:53.428656 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:34:53.453240 1193189 cri.go:89] found id: ""
	I1209 04:34:53.453254 1193189 logs.go:282] 0 containers: []
	W1209 04:34:53.453261 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:34:53.453266 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:34:53.453325 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:34:53.478715 1193189 cri.go:89] found id: ""
	I1209 04:34:53.478728 1193189 logs.go:282] 0 containers: []
	W1209 04:34:53.478735 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:34:53.478740 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:34:53.478798 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:34:53.503483 1193189 cri.go:89] found id: ""
	I1209 04:34:53.503497 1193189 logs.go:282] 0 containers: []
	W1209 04:34:53.503503 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:34:53.503511 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:34:53.503522 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:34:53.569898 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:34:53.560949   11970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:53.561857   11970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:53.563361   11970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:53.563947   11970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:53.565706   11970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:34:53.560949   11970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:53.561857   11970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:53.563361   11970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:53.563947   11970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:53.565706   11970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:34:53.569907 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:34:53.569918 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:34:53.631345 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:34:53.631366 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:34:53.657935 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:34:53.657951 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:34:53.717129 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:34:53.717148 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
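Editor's note: one incidental detail: the order of the "Gathering logs for ..." steps is not stable across cycles (compare the 04:34:44 cycle, which starts with kubelet, with this 04:34:53 cycle, which starts with describe nodes). That is consistent with iterating over a Go map of log sources, whose traversal order is deliberately randomized; this is an inference from the log, not something the log states. A tiny demonstration:

    // map_order.go - demonstrates Go's randomized map iteration order,
    // the assumed cause of the shuffled "Gathering logs for ..." steps.
    package main

    import "fmt"

    func main() {
        sources := map[string]string{
            "kubelet":          "journalctl -u kubelet -n 400",
            "dmesg":            "dmesg ... | tail -n 400",
            "describe nodes":   "kubectl describe nodes",
            "containerd":       "journalctl -u containerd -n 400",
            "container status": "crictl ps -a || docker ps -a",
        }
        for i := 0; i < 3; i++ {
            for name := range sources {
                fmt.Print(name, " | ") // order may differ on each pass
            }
            fmt.Println()
        }
    }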
	I1209 04:34:56.235149 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:34:56.245451 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:34:56.245512 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:34:56.273858 1193189 cri.go:89] found id: ""
	I1209 04:34:56.273872 1193189 logs.go:282] 0 containers: []
	W1209 04:34:56.273879 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:34:56.273884 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:34:56.273946 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:34:56.299990 1193189 cri.go:89] found id: ""
	I1209 04:34:56.300004 1193189 logs.go:282] 0 containers: []
	W1209 04:34:56.300036 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:34:56.300042 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:34:56.300109 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:34:56.325952 1193189 cri.go:89] found id: ""
	I1209 04:34:56.325965 1193189 logs.go:282] 0 containers: []
	W1209 04:34:56.325972 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:34:56.325977 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:34:56.326044 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:34:56.349999 1193189 cri.go:89] found id: ""
	I1209 04:34:56.350013 1193189 logs.go:282] 0 containers: []
	W1209 04:34:56.350020 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:34:56.350025 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:34:56.350088 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:34:56.376083 1193189 cri.go:89] found id: ""
	I1209 04:34:56.376097 1193189 logs.go:282] 0 containers: []
	W1209 04:34:56.376104 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:34:56.376109 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:34:56.376177 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:34:56.400259 1193189 cri.go:89] found id: ""
	I1209 04:34:56.400273 1193189 logs.go:282] 0 containers: []
	W1209 04:34:56.400280 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:34:56.400293 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:34:56.400352 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:34:56.424757 1193189 cri.go:89] found id: ""
	I1209 04:34:56.424777 1193189 logs.go:282] 0 containers: []
	W1209 04:34:56.424784 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:34:56.424792 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:34:56.424802 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:34:56.453832 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:34:56.453849 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:34:56.512444 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:34:56.512463 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:34:56.531303 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:34:56.531322 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:34:56.595582 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:34:56.587456   12094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:56.588255   12094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:56.589902   12094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:56.590193   12094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:56.591722   12094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:34:56.587456   12094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:56.588255   12094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:56.589902   12094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:56.590193   12094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:56.591722   12094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:34:56.595592 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:34:56.595602 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:34:59.163281 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:34:59.173117 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:34:59.173176 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:34:59.206232 1193189 cri.go:89] found id: ""
	I1209 04:34:59.206246 1193189 logs.go:282] 0 containers: []
	W1209 04:34:59.206253 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:34:59.206257 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:34:59.206321 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:34:59.239889 1193189 cri.go:89] found id: ""
	I1209 04:34:59.239903 1193189 logs.go:282] 0 containers: []
	W1209 04:34:59.239910 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:34:59.239915 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:34:59.239977 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:34:59.268932 1193189 cri.go:89] found id: ""
	I1209 04:34:59.268946 1193189 logs.go:282] 0 containers: []
	W1209 04:34:59.268953 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:34:59.268958 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:34:59.269019 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:34:59.293191 1193189 cri.go:89] found id: ""
	I1209 04:34:59.293205 1193189 logs.go:282] 0 containers: []
	W1209 04:34:59.293211 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:34:59.293217 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:34:59.293279 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:34:59.317923 1193189 cri.go:89] found id: ""
	I1209 04:34:59.317936 1193189 logs.go:282] 0 containers: []
	W1209 04:34:59.317943 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:34:59.317948 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:34:59.318009 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:34:59.342336 1193189 cri.go:89] found id: ""
	I1209 04:34:59.342350 1193189 logs.go:282] 0 containers: []
	W1209 04:34:59.342356 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:34:59.342361 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:34:59.342419 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:34:59.366502 1193189 cri.go:89] found id: ""
	I1209 04:34:59.366517 1193189 logs.go:282] 0 containers: []
	W1209 04:34:59.366524 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:34:59.366532 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:34:59.366542 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:34:59.422133 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:34:59.422153 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:34:59.439160 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:34:59.439187 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:34:59.506261 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:34:59.497371   12189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:59.498039   12189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:59.499661   12189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:59.500189   12189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:59.501847   12189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:34:59.506271 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:34:59.506282 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:34:59.575415 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:34:59.575436 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
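	The "found id" / "No container was found matching" pairs above come from scanning each control-plane component with crictl and getting an empty ID list back. A hedged sketch of that scan, using the same crictl invocation the log shows, to be run inside the node (e.g. via minikube ssh):

	    # For each component name, list matching container IDs; an empty
	    # result corresponds to the `found id: ""` lines in the log.
	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	                kube-controller-manager kindnet; do
	      ids=$(sudo crictl ps -a --quiet --name="$name")
	      if [ -z "$ids" ]; then
	        echo "no container matching \"$name\""
	      else
	        echo "$name: $ids"
	      fi
	    done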
	I1209 04:35:02.103491 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:35:02.113633 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:35:02.113694 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:35:02.144619 1193189 cri.go:89] found id: ""
	I1209 04:35:02.144633 1193189 logs.go:282] 0 containers: []
	W1209 04:35:02.144640 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:35:02.144646 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:35:02.144705 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:35:02.170344 1193189 cri.go:89] found id: ""
	I1209 04:35:02.170361 1193189 logs.go:282] 0 containers: []
	W1209 04:35:02.170368 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:35:02.170373 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:35:02.170433 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:35:02.197667 1193189 cri.go:89] found id: ""
	I1209 04:35:02.197691 1193189 logs.go:282] 0 containers: []
	W1209 04:35:02.197699 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:35:02.197704 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:35:02.197776 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:35:02.234579 1193189 cri.go:89] found id: ""
	I1209 04:35:02.234593 1193189 logs.go:282] 0 containers: []
	W1209 04:35:02.234600 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:35:02.234605 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:35:02.234676 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:35:02.261734 1193189 cri.go:89] found id: ""
	I1209 04:35:02.261750 1193189 logs.go:282] 0 containers: []
	W1209 04:35:02.261757 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:35:02.261763 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:35:02.261840 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:35:02.287117 1193189 cri.go:89] found id: ""
	I1209 04:35:02.287132 1193189 logs.go:282] 0 containers: []
	W1209 04:35:02.287149 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:35:02.287155 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:35:02.287215 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:35:02.316821 1193189 cri.go:89] found id: ""
	I1209 04:35:02.316841 1193189 logs.go:282] 0 containers: []
	W1209 04:35:02.316887 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:35:02.316894 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:35:02.316908 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:35:02.374344 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:35:02.374364 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:35:02.391657 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:35:02.391675 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:35:02.456609 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:35:02.448842   12295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:02.449370   12295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:02.450865   12295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:02.451343   12295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:02.452897   12295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:35:02.456619 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:35:02.456630 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:35:02.522522 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:35:02.522544 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:35:05.052204 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:35:05.062711 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:35:05.062783 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:35:05.088683 1193189 cri.go:89] found id: ""
	I1209 04:35:05.088699 1193189 logs.go:282] 0 containers: []
	W1209 04:35:05.088708 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:35:05.088714 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:35:05.088786 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:35:05.114558 1193189 cri.go:89] found id: ""
	I1209 04:35:05.114573 1193189 logs.go:282] 0 containers: []
	W1209 04:35:05.114580 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:35:05.114585 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:35:05.114647 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:35:05.139679 1193189 cri.go:89] found id: ""
	I1209 04:35:05.139694 1193189 logs.go:282] 0 containers: []
	W1209 04:35:05.139701 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:35:05.139713 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:35:05.139785 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:35:05.165102 1193189 cri.go:89] found id: ""
	I1209 04:35:05.165116 1193189 logs.go:282] 0 containers: []
	W1209 04:35:05.165123 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:35:05.165129 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:35:05.165200 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:35:05.193330 1193189 cri.go:89] found id: ""
	I1209 04:35:05.193354 1193189 logs.go:282] 0 containers: []
	W1209 04:35:05.193361 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:35:05.193366 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:35:05.193434 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:35:05.225572 1193189 cri.go:89] found id: ""
	I1209 04:35:05.225602 1193189 logs.go:282] 0 containers: []
	W1209 04:35:05.225610 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:35:05.225615 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:35:05.225684 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:35:05.253111 1193189 cri.go:89] found id: ""
	I1209 04:35:05.253125 1193189 logs.go:282] 0 containers: []
	W1209 04:35:05.253134 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:35:05.253142 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:35:05.253151 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:35:05.311870 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:35:05.311891 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:35:05.329165 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:35:05.329181 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:35:05.403755 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:35:05.395160   12402 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:05.395840   12402 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:05.397743   12402 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:05.398247   12402 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:05.399758   12402 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:35:05.403765 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:35:05.403778 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:35:05.466140 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:35:05.466163 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
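	When no component containers are found, minikube falls back to gathering host-side logs, and the commands are visible verbatim above. A sketch that collects the same evidence by hand inside the node:

	    # Last 400 journal lines for the two services minikube inspects.
	    sudo journalctl -u kubelet -n 400
	    sudo journalctl -u containerd -n 400

	    # Kernel messages at warning level and above, as in the log.
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400

	    # Container status, preferring crictl and falling back to docker.
	    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a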
	I1209 04:35:08.001482 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:35:08.012555 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:35:08.012621 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:35:08.038489 1193189 cri.go:89] found id: ""
	I1209 04:35:08.038502 1193189 logs.go:282] 0 containers: []
	W1209 04:35:08.038510 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:35:08.038515 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:35:08.038577 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:35:08.063791 1193189 cri.go:89] found id: ""
	I1209 04:35:08.063806 1193189 logs.go:282] 0 containers: []
	W1209 04:35:08.063813 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:35:08.063819 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:35:08.063883 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:35:08.088918 1193189 cri.go:89] found id: ""
	I1209 04:35:08.088933 1193189 logs.go:282] 0 containers: []
	W1209 04:35:08.088940 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:35:08.088945 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:35:08.089006 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:35:08.113601 1193189 cri.go:89] found id: ""
	I1209 04:35:08.113614 1193189 logs.go:282] 0 containers: []
	W1209 04:35:08.113623 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:35:08.113628 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:35:08.113684 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:35:08.136899 1193189 cri.go:89] found id: ""
	I1209 04:35:08.136912 1193189 logs.go:282] 0 containers: []
	W1209 04:35:08.136924 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:35:08.136929 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:35:08.136988 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:35:08.160001 1193189 cri.go:89] found id: ""
	I1209 04:35:08.160050 1193189 logs.go:282] 0 containers: []
	W1209 04:35:08.160057 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:35:08.160062 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:35:08.160119 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:35:08.193362 1193189 cri.go:89] found id: ""
	I1209 04:35:08.193375 1193189 logs.go:282] 0 containers: []
	W1209 04:35:08.193382 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:35:08.193390 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:35:08.193400 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:35:08.255924 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:35:08.255942 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:35:08.274860 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:35:08.274876 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:35:08.341852 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:35:08.333782   12503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:08.334529   12503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:08.336277   12503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:08.336676   12503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:08.338082   12503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:35:08.341863 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:35:08.341875 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:35:08.402199 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:35:08.402217 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:35:10.929478 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:35:10.939723 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:35:10.939784 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:35:10.964690 1193189 cri.go:89] found id: ""
	I1209 04:35:10.964704 1193189 logs.go:282] 0 containers: []
	W1209 04:35:10.964711 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:35:10.964716 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:35:10.964796 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:35:10.993239 1193189 cri.go:89] found id: ""
	I1209 04:35:10.993253 1193189 logs.go:282] 0 containers: []
	W1209 04:35:10.993260 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:35:10.993265 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:35:10.993323 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:35:11.019779 1193189 cri.go:89] found id: ""
	I1209 04:35:11.019793 1193189 logs.go:282] 0 containers: []
	W1209 04:35:11.019800 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:35:11.019805 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:35:11.019867 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:35:11.044082 1193189 cri.go:89] found id: ""
	I1209 04:35:11.044095 1193189 logs.go:282] 0 containers: []
	W1209 04:35:11.044104 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:35:11.044109 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:35:11.044170 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:35:11.067732 1193189 cri.go:89] found id: ""
	I1209 04:35:11.067746 1193189 logs.go:282] 0 containers: []
	W1209 04:35:11.067753 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:35:11.067758 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:35:11.067827 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:35:11.094131 1193189 cri.go:89] found id: ""
	I1209 04:35:11.094145 1193189 logs.go:282] 0 containers: []
	W1209 04:35:11.094152 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:35:11.094157 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:35:11.094217 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:35:11.120246 1193189 cri.go:89] found id: ""
	I1209 04:35:11.120261 1193189 logs.go:282] 0 containers: []
	W1209 04:35:11.120269 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:35:11.120277 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:35:11.120288 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:35:11.188699 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:35:11.188719 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:35:11.220249 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:35:11.220272 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:35:11.281813 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:35:11.281834 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:35:11.299608 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:35:11.299624 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:35:11.364974 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:35:11.356357   12622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:11.357044   12622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:11.358808   12622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:11.359431   12622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:11.361160   12622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:35:13.865252 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:35:13.875906 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:35:13.875966 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:35:13.901925 1193189 cri.go:89] found id: ""
	I1209 04:35:13.901941 1193189 logs.go:282] 0 containers: []
	W1209 04:35:13.901947 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:35:13.901953 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:35:13.902023 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:35:13.929808 1193189 cri.go:89] found id: ""
	I1209 04:35:13.929823 1193189 logs.go:282] 0 containers: []
	W1209 04:35:13.929830 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:35:13.929835 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:35:13.929896 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:35:13.955030 1193189 cri.go:89] found id: ""
	I1209 04:35:13.955045 1193189 logs.go:282] 0 containers: []
	W1209 04:35:13.955051 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:35:13.955056 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:35:13.955114 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:35:13.979829 1193189 cri.go:89] found id: ""
	I1209 04:35:13.979843 1193189 logs.go:282] 0 containers: []
	W1209 04:35:13.979849 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:35:13.979854 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:35:13.979918 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:35:14.007254 1193189 cri.go:89] found id: ""
	I1209 04:35:14.007269 1193189 logs.go:282] 0 containers: []
	W1209 04:35:14.007275 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:35:14.007281 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:35:14.007345 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:35:14.032915 1193189 cri.go:89] found id: ""
	I1209 04:35:14.032929 1193189 logs.go:282] 0 containers: []
	W1209 04:35:14.032936 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:35:14.032941 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:35:14.032999 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:35:14.061801 1193189 cri.go:89] found id: ""
	I1209 04:35:14.061826 1193189 logs.go:282] 0 containers: []
	W1209 04:35:14.061834 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:35:14.061842 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:35:14.061853 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:35:14.125545 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:35:14.117510   12701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:14.118249   12701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:14.119815   12701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:14.120178   12701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:14.121732   12701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:35:14.125555 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:35:14.125569 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:35:14.192586 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:35:14.192605 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:35:14.223400 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:35:14.223417 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:35:14.284525 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:35:14.284545 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
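	The timestamps show the same cycle repeating roughly every three seconds, each iteration opening with the pgrep check for a kube-apiserver process. A minimal sketch of that wait loop with an explicit deadline (the 120 s budget is an assumption for illustration, not the test's actual timeout):

	    # Poll for a kube-apiserver process, as the log's pgrep line does,
	    # giving up once the deadline passes.
	    deadline=$((SECONDS + 120))
	    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	      if [ "$SECONDS" -ge "$deadline" ]; then
	        echo "timed out waiting for kube-apiserver" >&2
	        exit 1
	      fi
	      sleep 3
	    done
	    echo "kube-apiserver is running"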
	I1209 04:35:16.802913 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:35:16.812669 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:35:16.812730 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:35:16.836304 1193189 cri.go:89] found id: ""
	I1209 04:35:16.836318 1193189 logs.go:282] 0 containers: []
	W1209 04:35:16.836324 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:35:16.836329 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:35:16.836386 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:35:16.861382 1193189 cri.go:89] found id: ""
	I1209 04:35:16.861396 1193189 logs.go:282] 0 containers: []
	W1209 04:35:16.861403 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:35:16.861407 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:35:16.861467 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:35:16.884827 1193189 cri.go:89] found id: ""
	I1209 04:35:16.884841 1193189 logs.go:282] 0 containers: []
	W1209 04:35:16.884848 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:35:16.884853 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:35:16.884913 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:35:16.907933 1193189 cri.go:89] found id: ""
	I1209 04:35:16.907946 1193189 logs.go:282] 0 containers: []
	W1209 04:35:16.907953 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:35:16.907959 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:35:16.908028 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:35:16.933329 1193189 cri.go:89] found id: ""
	I1209 04:35:16.933344 1193189 logs.go:282] 0 containers: []
	W1209 04:35:16.933350 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:35:16.933355 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:35:16.933418 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:35:16.957725 1193189 cri.go:89] found id: ""
	I1209 04:35:16.957739 1193189 logs.go:282] 0 containers: []
	W1209 04:35:16.957745 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:35:16.957751 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:35:16.957807 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:35:16.981209 1193189 cri.go:89] found id: ""
	I1209 04:35:16.981223 1193189 logs.go:282] 0 containers: []
	W1209 04:35:16.981231 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:35:16.981240 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:35:16.981249 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:35:17.039472 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:35:17.039491 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:35:17.056497 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:35:17.056514 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:35:17.119231 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:35:17.111277   12811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:17.111948   12811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:17.113585   12811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:17.114023   12811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:17.115511   12811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:35:17.119240 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:35:17.119251 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:35:17.181494 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:35:17.181513 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:35:19.709396 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:35:19.719323 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:35:19.719388 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:35:19.743245 1193189 cri.go:89] found id: ""
	I1209 04:35:19.743259 1193189 logs.go:282] 0 containers: []
	W1209 04:35:19.743266 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:35:19.743271 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:35:19.743328 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:35:19.767566 1193189 cri.go:89] found id: ""
	I1209 04:35:19.767581 1193189 logs.go:282] 0 containers: []
	W1209 04:35:19.767587 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:35:19.767592 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:35:19.767649 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:35:19.797227 1193189 cri.go:89] found id: ""
	I1209 04:35:19.797241 1193189 logs.go:282] 0 containers: []
	W1209 04:35:19.797248 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:35:19.797253 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:35:19.797311 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:35:19.820451 1193189 cri.go:89] found id: ""
	I1209 04:35:19.820465 1193189 logs.go:282] 0 containers: []
	W1209 04:35:19.820471 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:35:19.820477 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:35:19.820534 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:35:19.844577 1193189 cri.go:89] found id: ""
	I1209 04:35:19.844591 1193189 logs.go:282] 0 containers: []
	W1209 04:35:19.844597 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:35:19.844603 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:35:19.844661 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:35:19.868336 1193189 cri.go:89] found id: ""
	I1209 04:35:19.868350 1193189 logs.go:282] 0 containers: []
	W1209 04:35:19.868356 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:35:19.868362 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:35:19.868430 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:35:19.893016 1193189 cri.go:89] found id: ""
	I1209 04:35:19.893030 1193189 logs.go:282] 0 containers: []
	W1209 04:35:19.893037 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:35:19.893045 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:35:19.893055 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:35:19.947540 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:35:19.947561 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:35:19.964623 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:35:19.964640 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:35:20.041799 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:35:20.033487   12917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:20.034256   12917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:20.035804   12917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:20.036289   12917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:20.037849   12917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:35:20.041809 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:35:20.041829 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:35:20.106338 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:35:20.106361 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:35:22.634358 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:35:22.644145 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:35:22.644208 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:35:22.670154 1193189 cri.go:89] found id: ""
	I1209 04:35:22.670171 1193189 logs.go:282] 0 containers: []
	W1209 04:35:22.670178 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:35:22.670189 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:35:22.670255 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:35:22.704705 1193189 cri.go:89] found id: ""
	I1209 04:35:22.704724 1193189 logs.go:282] 0 containers: []
	W1209 04:35:22.704731 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:35:22.704742 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:35:22.704815 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:35:22.729994 1193189 cri.go:89] found id: ""
	I1209 04:35:22.730010 1193189 logs.go:282] 0 containers: []
	W1209 04:35:22.730016 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:35:22.730021 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:35:22.730085 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:35:22.755372 1193189 cri.go:89] found id: ""
	I1209 04:35:22.755386 1193189 logs.go:282] 0 containers: []
	W1209 04:35:22.755393 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:35:22.755399 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:35:22.755468 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:35:22.781698 1193189 cri.go:89] found id: ""
	I1209 04:35:22.781712 1193189 logs.go:282] 0 containers: []
	W1209 04:35:22.781718 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:35:22.781724 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:35:22.781783 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:35:22.810395 1193189 cri.go:89] found id: ""
	I1209 04:35:22.810409 1193189 logs.go:282] 0 containers: []
	W1209 04:35:22.810417 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:35:22.810422 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:35:22.810491 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:35:22.834867 1193189 cri.go:89] found id: ""
	I1209 04:35:22.834881 1193189 logs.go:282] 0 containers: []
	W1209 04:35:22.834888 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:35:22.834896 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:35:22.834914 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:35:22.895493 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:35:22.895514 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:35:22.923338 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:35:22.923355 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:35:22.981048 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:35:22.981069 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:35:22.998202 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:35:22.998221 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:35:23.060221 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:35:23.052398   13038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:23.053078   13038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:23.054527   13038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:23.054989   13038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:23.056396   13038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:35:25.561920 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:35:25.571773 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:35:25.571837 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:35:25.595193 1193189 cri.go:89] found id: ""
	I1209 04:35:25.595207 1193189 logs.go:282] 0 containers: []
	W1209 04:35:25.595215 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:35:25.595220 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:35:25.595285 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:35:25.619637 1193189 cri.go:89] found id: ""
	I1209 04:35:25.619651 1193189 logs.go:282] 0 containers: []
	W1209 04:35:25.619658 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:35:25.619664 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:35:25.619726 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:35:25.644298 1193189 cri.go:89] found id: ""
	I1209 04:35:25.644313 1193189 logs.go:282] 0 containers: []
	W1209 04:35:25.644319 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:35:25.644325 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:35:25.644384 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:35:25.668990 1193189 cri.go:89] found id: ""
	I1209 04:35:25.669003 1193189 logs.go:282] 0 containers: []
	W1209 04:35:25.669011 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:35:25.669016 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:35:25.669078 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:35:25.693184 1193189 cri.go:89] found id: ""
	I1209 04:35:25.693199 1193189 logs.go:282] 0 containers: []
	W1209 04:35:25.693206 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:35:25.693211 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:35:25.693269 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:35:25.718924 1193189 cri.go:89] found id: ""
	I1209 04:35:25.718939 1193189 logs.go:282] 0 containers: []
	W1209 04:35:25.718946 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:35:25.718951 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:35:25.719014 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:35:25.744270 1193189 cri.go:89] found id: ""
	I1209 04:35:25.744287 1193189 logs.go:282] 0 containers: []
	W1209 04:35:25.744294 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:35:25.744303 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:35:25.744313 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:35:25.775297 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:35:25.775312 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:35:25.830399 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:35:25.830417 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:35:25.846995 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:35:25.847011 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:35:25.907973 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:35:25.899536   13140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:25.899964   13140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:25.901112   13140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:25.902600   13140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:25.903121   13140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:35:25.908000 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:35:25.908009 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:35:28.475800 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:35:28.486363 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:35:28.486434 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:35:28.511630 1193189 cri.go:89] found id: ""
	I1209 04:35:28.511649 1193189 logs.go:282] 0 containers: []
	W1209 04:35:28.511657 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:35:28.511662 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:35:28.511734 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:35:28.539616 1193189 cri.go:89] found id: ""
	I1209 04:35:28.539631 1193189 logs.go:282] 0 containers: []
	W1209 04:35:28.539638 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:35:28.539643 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:35:28.539704 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:35:28.563311 1193189 cri.go:89] found id: ""
	I1209 04:35:28.563325 1193189 logs.go:282] 0 containers: []
	W1209 04:35:28.563333 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:35:28.563338 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:35:28.563399 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:35:28.591490 1193189 cri.go:89] found id: ""
	I1209 04:35:28.591504 1193189 logs.go:282] 0 containers: []
	W1209 04:35:28.591511 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:35:28.591516 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:35:28.591574 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:35:28.614638 1193189 cri.go:89] found id: ""
	I1209 04:35:28.614653 1193189 logs.go:282] 0 containers: []
	W1209 04:35:28.614660 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:35:28.614665 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:35:28.614729 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:35:28.638698 1193189 cri.go:89] found id: ""
	I1209 04:35:28.638712 1193189 logs.go:282] 0 containers: []
	W1209 04:35:28.638720 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:35:28.638727 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:35:28.638788 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:35:28.665819 1193189 cri.go:89] found id: ""
	I1209 04:35:28.665837 1193189 logs.go:282] 0 containers: []
	W1209 04:35:28.665843 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:35:28.665851 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:35:28.665861 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:35:28.693372 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:35:28.693387 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:35:28.750183 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:35:28.750203 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:35:28.768641 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:35:28.768659 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:35:28.832332 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:35:28.823785   13239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:28.824261   13239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:28.826084   13239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:28.826770   13239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:28.828406   13239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:35:28.832342 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:35:28.832352 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:35:31.394797 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:35:31.404399 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:35:31.404459 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:35:31.427865 1193189 cri.go:89] found id: ""
	I1209 04:35:31.427879 1193189 logs.go:282] 0 containers: []
	W1209 04:35:31.427886 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:35:31.427893 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:35:31.427957 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:35:31.465245 1193189 cri.go:89] found id: ""
	I1209 04:35:31.465259 1193189 logs.go:282] 0 containers: []
	W1209 04:35:31.465266 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:35:31.465271 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:35:31.465333 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:35:31.499189 1193189 cri.go:89] found id: ""
	I1209 04:35:31.499202 1193189 logs.go:282] 0 containers: []
	W1209 04:35:31.499209 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:35:31.499215 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:35:31.499272 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:35:31.525936 1193189 cri.go:89] found id: ""
	I1209 04:35:31.525950 1193189 logs.go:282] 0 containers: []
	W1209 04:35:31.525958 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:35:31.525963 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:35:31.526023 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:35:31.550933 1193189 cri.go:89] found id: ""
	I1209 04:35:31.550948 1193189 logs.go:282] 0 containers: []
	W1209 04:35:31.550955 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:35:31.550960 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:35:31.551019 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:35:31.574667 1193189 cri.go:89] found id: ""
	I1209 04:35:31.574681 1193189 logs.go:282] 0 containers: []
	W1209 04:35:31.574689 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:35:31.574694 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:35:31.574754 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:35:31.599346 1193189 cri.go:89] found id: ""
	I1209 04:35:31.599360 1193189 logs.go:282] 0 containers: []
	W1209 04:35:31.599367 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:35:31.599374 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:35:31.599384 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:35:31.625893 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:35:31.625912 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:35:31.681164 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:35:31.681181 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:35:31.697997 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:35:31.698014 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:35:31.765231 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:35:31.757080   13343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:31.757463   13343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:31.759010   13343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:31.759311   13343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:31.760784   13343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:35:31.765242 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:35:31.765253 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:35:34.325149 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:35:34.334839 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:35:34.334897 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:35:34.359238 1193189 cri.go:89] found id: ""
	I1209 04:35:34.359251 1193189 logs.go:282] 0 containers: []
	W1209 04:35:34.359258 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:35:34.359263 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:35:34.359324 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:35:34.383217 1193189 cri.go:89] found id: ""
	I1209 04:35:34.383231 1193189 logs.go:282] 0 containers: []
	W1209 04:35:34.383237 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:35:34.383242 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:35:34.383301 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:35:34.407421 1193189 cri.go:89] found id: ""
	I1209 04:35:34.407435 1193189 logs.go:282] 0 containers: []
	W1209 04:35:34.407442 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:35:34.407454 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:35:34.407513 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:35:34.440852 1193189 cri.go:89] found id: ""
	I1209 04:35:34.440865 1193189 logs.go:282] 0 containers: []
	W1209 04:35:34.440872 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:35:34.440878 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:35:34.440938 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:35:34.474370 1193189 cri.go:89] found id: ""
	I1209 04:35:34.474382 1193189 logs.go:282] 0 containers: []
	W1209 04:35:34.474389 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:35:34.474400 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:35:34.474459 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:35:34.503074 1193189 cri.go:89] found id: ""
	I1209 04:35:34.503088 1193189 logs.go:282] 0 containers: []
	W1209 04:35:34.503095 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:35:34.503103 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:35:34.503160 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:35:34.533672 1193189 cri.go:89] found id: ""
	I1209 04:35:34.533686 1193189 logs.go:282] 0 containers: []
	W1209 04:35:34.533693 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:35:34.533701 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:35:34.533711 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:35:34.550119 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:35:34.550138 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:35:34.614817 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:35:34.606452   13437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:34.606849   13437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:34.608481   13437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:34.609159   13437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:34.610864   13437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:35:34.614827 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:35:34.614837 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:35:34.677461 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:35:34.677482 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:35:34.703505 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:35:34.703520 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:35:37.258780 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:35:37.268941 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:35:37.269002 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:35:37.292668 1193189 cri.go:89] found id: ""
	I1209 04:35:37.292682 1193189 logs.go:282] 0 containers: []
	W1209 04:35:37.292689 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:35:37.292694 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:35:37.292757 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:35:37.320157 1193189 cri.go:89] found id: ""
	I1209 04:35:37.320171 1193189 logs.go:282] 0 containers: []
	W1209 04:35:37.320177 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:35:37.320183 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:35:37.320240 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:35:37.343858 1193189 cri.go:89] found id: ""
	I1209 04:35:37.343872 1193189 logs.go:282] 0 containers: []
	W1209 04:35:37.343879 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:35:37.343884 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:35:37.343947 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:35:37.366919 1193189 cri.go:89] found id: ""
	I1209 04:35:37.366932 1193189 logs.go:282] 0 containers: []
	W1209 04:35:37.366939 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:35:37.366945 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:35:37.367003 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:35:37.391330 1193189 cri.go:89] found id: ""
	I1209 04:35:37.391344 1193189 logs.go:282] 0 containers: []
	W1209 04:35:37.391351 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:35:37.391356 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:35:37.391417 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:35:37.414885 1193189 cri.go:89] found id: ""
	I1209 04:35:37.414899 1193189 logs.go:282] 0 containers: []
	W1209 04:35:37.414906 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:35:37.414911 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:35:37.414967 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:35:37.440557 1193189 cri.go:89] found id: ""
	I1209 04:35:37.440570 1193189 logs.go:282] 0 containers: []
	W1209 04:35:37.440577 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:35:37.440585 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:35:37.440595 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:35:37.501076 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:35:37.501094 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:35:37.523552 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:35:37.523569 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:35:37.590387 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:35:37.582017   13547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:37.582700   13547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:37.584424   13547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:37.584939   13547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:37.586547   13547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:35:37.590397 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:35:37.590408 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:35:37.653090 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:35:37.653108 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:35:40.184839 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:35:40.195112 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:35:40.195177 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:35:40.221158 1193189 cri.go:89] found id: ""
	I1209 04:35:40.221173 1193189 logs.go:282] 0 containers: []
	W1209 04:35:40.221180 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:35:40.221185 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:35:40.221246 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:35:40.246395 1193189 cri.go:89] found id: ""
	I1209 04:35:40.246415 1193189 logs.go:282] 0 containers: []
	W1209 04:35:40.246422 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:35:40.246428 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:35:40.246487 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:35:40.270697 1193189 cri.go:89] found id: ""
	I1209 04:35:40.270711 1193189 logs.go:282] 0 containers: []
	W1209 04:35:40.270718 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:35:40.270723 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:35:40.270781 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:35:40.295006 1193189 cri.go:89] found id: ""
	I1209 04:35:40.295021 1193189 logs.go:282] 0 containers: []
	W1209 04:35:40.295028 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:35:40.295033 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:35:40.295093 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:35:40.319784 1193189 cri.go:89] found id: ""
	I1209 04:35:40.319797 1193189 logs.go:282] 0 containers: []
	W1209 04:35:40.319804 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:35:40.319810 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:35:40.319872 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:35:40.344094 1193189 cri.go:89] found id: ""
	I1209 04:35:40.344108 1193189 logs.go:282] 0 containers: []
	W1209 04:35:40.344115 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:35:40.344120 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:35:40.344181 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:35:40.368626 1193189 cri.go:89] found id: ""
	I1209 04:35:40.368640 1193189 logs.go:282] 0 containers: []
	W1209 04:35:40.368647 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:35:40.368654 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:35:40.368665 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:35:40.423837 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:35:40.423857 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:35:40.452134 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:35:40.452157 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:35:40.527559 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:35:40.519583   13649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:40.519986   13649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:40.521271   13649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:40.521835   13649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:40.523570   13649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:35:40.527610 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:35:40.527620 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:35:40.588474 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:35:40.588495 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:35:43.118634 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:35:43.128671 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:35:43.128738 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:35:43.152143 1193189 cri.go:89] found id: ""
	I1209 04:35:43.152158 1193189 logs.go:282] 0 containers: []
	W1209 04:35:43.152179 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:35:43.152185 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:35:43.152255 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:35:43.176188 1193189 cri.go:89] found id: ""
	I1209 04:35:43.176203 1193189 logs.go:282] 0 containers: []
	W1209 04:35:43.176210 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:35:43.176215 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:35:43.176275 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:35:43.199682 1193189 cri.go:89] found id: ""
	I1209 04:35:43.199696 1193189 logs.go:282] 0 containers: []
	W1209 04:35:43.199702 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:35:43.199707 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:35:43.199767 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:35:43.224229 1193189 cri.go:89] found id: ""
	I1209 04:35:43.224244 1193189 logs.go:282] 0 containers: []
	W1209 04:35:43.224251 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:35:43.224257 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:35:43.224318 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:35:43.249684 1193189 cri.go:89] found id: ""
	I1209 04:35:43.249698 1193189 logs.go:282] 0 containers: []
	W1209 04:35:43.249705 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:35:43.249710 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:35:43.249773 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:35:43.273701 1193189 cri.go:89] found id: ""
	I1209 04:35:43.273715 1193189 logs.go:282] 0 containers: []
	W1209 04:35:43.273724 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:35:43.273729 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:35:43.273790 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:35:43.297360 1193189 cri.go:89] found id: ""
	I1209 04:35:43.297375 1193189 logs.go:282] 0 containers: []
	W1209 04:35:43.297382 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:35:43.297389 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:35:43.297400 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:35:43.323849 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:35:43.323865 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:35:43.380806 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:35:43.380825 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:35:43.397905 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:35:43.397924 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:35:43.474648 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:35:43.464143   13758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:43.464857   13758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:43.468210   13758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:43.468799   13758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:43.470475   13758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:35:43.474658 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:35:43.474668 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:35:46.038037 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:35:46.048448 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:35:46.048513 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:35:46.073156 1193189 cri.go:89] found id: ""
	I1209 04:35:46.073170 1193189 logs.go:282] 0 containers: []
	W1209 04:35:46.073177 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:35:46.073182 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:35:46.073246 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:35:46.103227 1193189 cri.go:89] found id: ""
	I1209 04:35:46.103242 1193189 logs.go:282] 0 containers: []
	W1209 04:35:46.103249 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:35:46.103255 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:35:46.103324 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:35:46.126371 1193189 cri.go:89] found id: ""
	I1209 04:35:46.126385 1193189 logs.go:282] 0 containers: []
	W1209 04:35:46.126392 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:35:46.126397 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:35:46.126457 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:35:46.151271 1193189 cri.go:89] found id: ""
	I1209 04:35:46.151284 1193189 logs.go:282] 0 containers: []
	W1209 04:35:46.151291 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:35:46.151296 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:35:46.151354 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:35:46.175057 1193189 cri.go:89] found id: ""
	I1209 04:35:46.175071 1193189 logs.go:282] 0 containers: []
	W1209 04:35:46.175077 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:35:46.175082 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:35:46.175140 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:35:46.203063 1193189 cri.go:89] found id: ""
	I1209 04:35:46.203078 1193189 logs.go:282] 0 containers: []
	W1209 04:35:46.203085 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:35:46.203091 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:35:46.203148 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:35:46.229251 1193189 cri.go:89] found id: ""
	I1209 04:35:46.229267 1193189 logs.go:282] 0 containers: []
	W1209 04:35:46.229274 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:35:46.229281 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:35:46.229291 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:35:46.298699 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:35:46.289900   13846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:46.290515   13846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:46.292304   13846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:46.292640   13846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:46.294235   13846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:35:46.298709 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:35:46.298720 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:35:46.363949 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:35:46.363976 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:35:46.391889 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:35:46.391906 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:35:46.454456 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:35:46.454483 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
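	Every kubectl call above fails with `connect: connection refused` on `[::1]:8441`, which means nothing is listening on the apiserver port the test configured (--apiserver-port=8441). A raw TCP probe confirms that independently of kubectl; a minimal sketch (host and port taken from the log, everything else illustrative):

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// The test starts the apiserver on port 8441 (--apiserver-port=8441).
	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
	if err != nil {
		// "connection refused" here matches the kubectl errors in the log:
		// the port is closed because kube-apiserver never came up.
		fmt.Println("apiserver not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on localhost:8441")
}
```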
	I1209 04:35:48.975649 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:35:48.985708 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:35:48.985766 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:35:49.011399 1193189 cri.go:89] found id: ""
	I1209 04:35:49.011413 1193189 logs.go:282] 0 containers: []
	W1209 04:35:49.011420 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:35:49.011426 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:35:49.011483 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:35:49.036873 1193189 cri.go:89] found id: ""
	I1209 04:35:49.036887 1193189 logs.go:282] 0 containers: []
	W1209 04:35:49.036894 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:35:49.036899 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:35:49.036960 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:35:49.066005 1193189 cri.go:89] found id: ""
	I1209 04:35:49.066019 1193189 logs.go:282] 0 containers: []
	W1209 04:35:49.066025 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:35:49.066031 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:35:49.066091 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:35:49.093270 1193189 cri.go:89] found id: ""
	I1209 04:35:49.093284 1193189 logs.go:282] 0 containers: []
	W1209 04:35:49.093291 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:35:49.093297 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:35:49.093357 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:35:49.116583 1193189 cri.go:89] found id: ""
	I1209 04:35:49.116597 1193189 logs.go:282] 0 containers: []
	W1209 04:35:49.116604 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:35:49.116609 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:35:49.116667 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:35:49.141295 1193189 cri.go:89] found id: ""
	I1209 04:35:49.141309 1193189 logs.go:282] 0 containers: []
	W1209 04:35:49.141316 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:35:49.141321 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:35:49.141382 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:35:49.164496 1193189 cri.go:89] found id: ""
	I1209 04:35:49.164509 1193189 logs.go:282] 0 containers: []
	W1209 04:35:49.164516 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:35:49.164524 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:35:49.164533 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:35:49.220406 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:35:49.220426 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:35:49.237143 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:35:49.237159 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:35:49.305702 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:35:49.296253   13955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:49.297596   13955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:49.298689   13955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:49.299456   13955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:49.301121   13955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:35:49.305724 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:35:49.305737 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:35:49.367200 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:35:49.367219 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:35:51.895283 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:35:51.905706 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:35:51.905765 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:35:51.929677 1193189 cri.go:89] found id: ""
	I1209 04:35:51.929691 1193189 logs.go:282] 0 containers: []
	W1209 04:35:51.929698 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:35:51.929703 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:35:51.929764 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:35:51.953232 1193189 cri.go:89] found id: ""
	I1209 04:35:51.953246 1193189 logs.go:282] 0 containers: []
	W1209 04:35:51.953252 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:35:51.953257 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:35:51.953314 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:35:51.979515 1193189 cri.go:89] found id: ""
	I1209 04:35:51.979528 1193189 logs.go:282] 0 containers: []
	W1209 04:35:51.979535 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:35:51.979540 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:35:51.979601 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:35:52.009061 1193189 cri.go:89] found id: ""
	I1209 04:35:52.009075 1193189 logs.go:282] 0 containers: []
	W1209 04:35:52.009082 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:35:52.009087 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:35:52.009154 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:35:52.036289 1193189 cri.go:89] found id: ""
	I1209 04:35:52.036309 1193189 logs.go:282] 0 containers: []
	W1209 04:35:52.036316 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:35:52.036321 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:35:52.036386 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:35:52.061853 1193189 cri.go:89] found id: ""
	I1209 04:35:52.061867 1193189 logs.go:282] 0 containers: []
	W1209 04:35:52.061874 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:35:52.061879 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:35:52.061942 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:35:52.090416 1193189 cri.go:89] found id: ""
	I1209 04:35:52.090443 1193189 logs.go:282] 0 containers: []
	W1209 04:35:52.090451 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:35:52.090459 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:35:52.090469 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:35:52.120980 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:35:52.120996 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:35:52.177079 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:35:52.177098 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:35:52.195520 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:35:52.195537 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:35:52.260151 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:35:52.251913   14072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:52.252734   14072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:52.254403   14072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:52.254982   14072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:52.256470   14072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:35:52.260161 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:35:52.260172 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
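	The timestamps show the whole check repeating on a roughly three-second cadence: minikube is polling until the apiserver answers or a deadline expires. A minimal sketch of that poll-until-deadline pattern (interval, timeout, and the probe function are illustrative choices, not minikube's actual values or API):

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// waitFor polls probe() every interval until it succeeds or the deadline
// passes, mirroring the ~3s cadence visible in the log timestamps.
func waitFor(probe func() error, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if err := probe(); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out after %s", timeout)
		}
		time.Sleep(interval)
	}
}

func main() {
	probe := func() error {
		conn, err := net.DialTimeout("tcp", "localhost:8441", time.Second)
		if err != nil {
			return err
		}
		return conn.Close()
	}
	if err := waitFor(probe, 3*time.Second, 30*time.Second); err != nil {
		fmt.Println("apiserver never became reachable:", err)
	}
}
```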
	I1209 04:35:54.821803 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:35:54.831356 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:35:54.831415 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:35:54.855283 1193189 cri.go:89] found id: ""
	I1209 04:35:54.855298 1193189 logs.go:282] 0 containers: []
	W1209 04:35:54.855304 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:35:54.855309 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:35:54.855369 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:35:54.889160 1193189 cri.go:89] found id: ""
	I1209 04:35:54.889174 1193189 logs.go:282] 0 containers: []
	W1209 04:35:54.889181 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:35:54.889186 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:35:54.889245 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:35:54.912925 1193189 cri.go:89] found id: ""
	I1209 04:35:54.912939 1193189 logs.go:282] 0 containers: []
	W1209 04:35:54.912946 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:35:54.912951 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:35:54.913019 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:35:54.937856 1193189 cri.go:89] found id: ""
	I1209 04:35:54.937869 1193189 logs.go:282] 0 containers: []
	W1209 04:35:54.937876 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:35:54.937881 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:35:54.937939 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:35:54.961607 1193189 cri.go:89] found id: ""
	I1209 04:35:54.961620 1193189 logs.go:282] 0 containers: []
	W1209 04:35:54.961626 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:35:54.961632 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:35:54.961692 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:35:54.984614 1193189 cri.go:89] found id: ""
	I1209 04:35:54.984627 1193189 logs.go:282] 0 containers: []
	W1209 04:35:54.984634 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:35:54.984639 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:35:54.984702 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:35:55.019938 1193189 cri.go:89] found id: ""
	I1209 04:35:55.019952 1193189 logs.go:282] 0 containers: []
	W1209 04:35:55.019959 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:35:55.019967 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:35:55.019977 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:35:55.076703 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:35:55.076722 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:35:55.094781 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:35:55.094801 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:35:55.164076 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:35:55.155994   14165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:55.156899   14165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:55.158415   14165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:55.158819   14165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:55.160056   14165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:35:55.164088 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:35:55.164098 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:35:55.225429 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:35:55.225451 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:35:57.756131 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:35:57.766096 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:35:57.766152 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:35:57.794059 1193189 cri.go:89] found id: ""
	I1209 04:35:57.794073 1193189 logs.go:282] 0 containers: []
	W1209 04:35:57.794080 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:35:57.794085 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:35:57.794142 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:35:57.817501 1193189 cri.go:89] found id: ""
	I1209 04:35:57.817514 1193189 logs.go:282] 0 containers: []
	W1209 04:35:57.817520 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:35:57.817526 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:35:57.817582 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:35:57.841800 1193189 cri.go:89] found id: ""
	I1209 04:35:57.841814 1193189 logs.go:282] 0 containers: []
	W1209 04:35:57.841821 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:35:57.841841 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:35:57.841905 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:35:57.865096 1193189 cri.go:89] found id: ""
	I1209 04:35:57.865109 1193189 logs.go:282] 0 containers: []
	W1209 04:35:57.865116 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:35:57.865122 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:35:57.865185 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:35:57.889214 1193189 cri.go:89] found id: ""
	I1209 04:35:57.889227 1193189 logs.go:282] 0 containers: []
	W1209 04:35:57.889234 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:35:57.889240 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:35:57.889299 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:35:57.913077 1193189 cri.go:89] found id: ""
	I1209 04:35:57.913090 1193189 logs.go:282] 0 containers: []
	W1209 04:35:57.913097 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:35:57.913102 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:35:57.913164 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:35:57.938101 1193189 cri.go:89] found id: ""
	I1209 04:35:57.938114 1193189 logs.go:282] 0 containers: []
	W1209 04:35:57.938121 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:35:57.938129 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:35:57.938139 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:35:57.968546 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:35:57.968563 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:35:58.025605 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:35:58.025626 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:35:58.042537 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:35:58.042554 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:35:58.112285 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:35:58.104144   14281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:58.104837   14281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:58.106385   14281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:58.106802   14281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:58.108456   14281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:35:58.112295 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:35:58.112317 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
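	Between polls, the log gathers diagnostics from four sources: kubelet and containerd via journalctl, the kernel ring buffer via dmesg, and container status via crictl (falling back to docker). A minimal local sketch running the same commands the log runs over SSH (command strings copied from the log; the surrounding structure is illustrative):

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// The same diagnostic commands the log runs over SSH, executed locally.
	cmds := map[string]string{
		"kubelet":          "sudo journalctl -u kubelet -n 400",
		"containerd":       "sudo journalctl -u containerd -n 400",
		"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
		"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
	}
	for name, cmd := range cmds {
		// CombinedOutput captures whatever the command produced even when
		// it exits non-zero, which is what you want for diagnostics.
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			fmt.Printf("== %s failed: %v ==\n", name, err)
		}
		fmt.Printf("== %s logs ==\n%s\n", name, out)
	}
}
```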
	I1209 04:36:00.674623 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:36:00.684871 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:36:00.684932 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:36:00.723046 1193189 cri.go:89] found id: ""
	I1209 04:36:00.723060 1193189 logs.go:282] 0 containers: []
	W1209 04:36:00.723067 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:36:00.723082 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:36:00.723142 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:36:00.755063 1193189 cri.go:89] found id: ""
	I1209 04:36:00.755077 1193189 logs.go:282] 0 containers: []
	W1209 04:36:00.755094 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:36:00.755100 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:36:00.755170 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:36:00.780343 1193189 cri.go:89] found id: ""
	I1209 04:36:00.780357 1193189 logs.go:282] 0 containers: []
	W1209 04:36:00.780368 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:36:00.780373 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:36:00.780432 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:36:00.805177 1193189 cri.go:89] found id: ""
	I1209 04:36:00.805191 1193189 logs.go:282] 0 containers: []
	W1209 04:36:00.805198 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:36:00.805203 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:36:00.805261 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:36:00.829413 1193189 cri.go:89] found id: ""
	I1209 04:36:00.829426 1193189 logs.go:282] 0 containers: []
	W1209 04:36:00.829432 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:36:00.829439 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:36:00.829500 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:36:00.853086 1193189 cri.go:89] found id: ""
	I1209 04:36:00.853100 1193189 logs.go:282] 0 containers: []
	W1209 04:36:00.853107 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:36:00.853112 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:36:00.853185 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:36:00.881064 1193189 cri.go:89] found id: ""
	I1209 04:36:00.881078 1193189 logs.go:282] 0 containers: []
	W1209 04:36:00.881085 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:36:00.881093 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:36:00.881103 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:36:00.950102 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:36:00.942130   14365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:00.942767   14365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:00.944430   14365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:00.944779   14365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:00.946290   14365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:36:00.950112 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:36:00.950123 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:36:01.012065 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:36:01.012086 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:36:01.041323 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:36:01.041339 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:36:01.099024 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:36:01.099044 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:36:03.616785 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:36:03.626636 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:36:03.626697 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:36:03.650973 1193189 cri.go:89] found id: ""
	I1209 04:36:03.650987 1193189 logs.go:282] 0 containers: []
	W1209 04:36:03.650994 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:36:03.650999 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:36:03.651060 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:36:03.674678 1193189 cri.go:89] found id: ""
	I1209 04:36:03.674692 1193189 logs.go:282] 0 containers: []
	W1209 04:36:03.674699 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:36:03.674705 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:36:03.674777 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:36:03.705193 1193189 cri.go:89] found id: ""
	I1209 04:36:03.705206 1193189 logs.go:282] 0 containers: []
	W1209 04:36:03.705213 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:36:03.705218 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:36:03.705281 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:36:03.733013 1193189 cri.go:89] found id: ""
	I1209 04:36:03.733026 1193189 logs.go:282] 0 containers: []
	W1209 04:36:03.733033 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:36:03.733038 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:36:03.733096 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:36:03.770375 1193189 cri.go:89] found id: ""
	I1209 04:36:03.770389 1193189 logs.go:282] 0 containers: []
	W1209 04:36:03.770396 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:36:03.770401 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:36:03.770457 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:36:03.793967 1193189 cri.go:89] found id: ""
	I1209 04:36:03.793980 1193189 logs.go:282] 0 containers: []
	W1209 04:36:03.793987 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:36:03.793992 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:36:03.794053 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:36:03.818652 1193189 cri.go:89] found id: ""
	I1209 04:36:03.818666 1193189 logs.go:282] 0 containers: []
	W1209 04:36:03.818672 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:36:03.818681 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:36:03.818691 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:36:03.873671 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:36:03.873692 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:36:03.890142 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:36:03.890159 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:36:03.958206 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:36:03.950384   14475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:03.950766   14475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:03.952402   14475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:03.952806   14475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:03.954365   14475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:36:03.958216 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:36:03.958227 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:36:04.019401 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:36:04.019421 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:36:06.551878 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:36:06.561600 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:36:06.561657 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:36:06.585277 1193189 cri.go:89] found id: ""
	I1209 04:36:06.585291 1193189 logs.go:282] 0 containers: []
	W1209 04:36:06.585298 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:36:06.585304 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:36:06.585366 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:36:06.613401 1193189 cri.go:89] found id: ""
	I1209 04:36:06.613415 1193189 logs.go:282] 0 containers: []
	W1209 04:36:06.613421 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:36:06.613426 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:36:06.613483 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:36:06.642329 1193189 cri.go:89] found id: ""
	I1209 04:36:06.642342 1193189 logs.go:282] 0 containers: []
	W1209 04:36:06.642349 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:36:06.642354 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:36:06.642413 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:36:06.666445 1193189 cri.go:89] found id: ""
	I1209 04:36:06.666458 1193189 logs.go:282] 0 containers: []
	W1209 04:36:06.666465 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:36:06.666470 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:36:06.666527 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:36:06.695405 1193189 cri.go:89] found id: ""
	I1209 04:36:06.695419 1193189 logs.go:282] 0 containers: []
	W1209 04:36:06.695425 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:36:06.695431 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:36:06.695488 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:36:06.734331 1193189 cri.go:89] found id: ""
	I1209 04:36:06.734345 1193189 logs.go:282] 0 containers: []
	W1209 04:36:06.734361 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:36:06.734372 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:36:06.734441 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:36:06.766210 1193189 cri.go:89] found id: ""
	I1209 04:36:06.766223 1193189 logs.go:282] 0 containers: []
	W1209 04:36:06.766231 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:36:06.766238 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:36:06.766248 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:36:06.822607 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:36:06.822627 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:36:06.839326 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:36:06.839342 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:36:06.900387 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:36:06.892243   14581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:06.892630   14581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:06.894401   14581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:06.894869   14581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:06.896343   14581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:36:06.900405 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:36:06.900421 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:36:06.961047 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:36:06.961067 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:36:09.488140 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:36:09.498332 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:36:09.498409 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:36:09.523347 1193189 cri.go:89] found id: ""
	I1209 04:36:09.523373 1193189 logs.go:282] 0 containers: []
	W1209 04:36:09.523380 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:36:09.523387 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:36:09.523459 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:36:09.550096 1193189 cri.go:89] found id: ""
	I1209 04:36:09.550111 1193189 logs.go:282] 0 containers: []
	W1209 04:36:09.550117 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:36:09.550123 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:36:09.550185 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:36:09.578695 1193189 cri.go:89] found id: ""
	I1209 04:36:09.578709 1193189 logs.go:282] 0 containers: []
	W1209 04:36:09.578715 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:36:09.578720 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:36:09.578784 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:36:09.607079 1193189 cri.go:89] found id: ""
	I1209 04:36:09.607093 1193189 logs.go:282] 0 containers: []
	W1209 04:36:09.607100 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:36:09.607105 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:36:09.607166 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:36:09.635495 1193189 cri.go:89] found id: ""
	I1209 04:36:09.635510 1193189 logs.go:282] 0 containers: []
	W1209 04:36:09.635516 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:36:09.635521 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:36:09.635584 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:36:09.661747 1193189 cri.go:89] found id: ""
	I1209 04:36:09.661761 1193189 logs.go:282] 0 containers: []
	W1209 04:36:09.661767 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:36:09.661773 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:36:09.661831 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:36:09.694535 1193189 cri.go:89] found id: ""
	I1209 04:36:09.694549 1193189 logs.go:282] 0 containers: []
	W1209 04:36:09.694556 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:36:09.694564 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:36:09.694574 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:36:09.759636 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:36:09.759656 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:36:09.777485 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:36:09.777502 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:36:09.841963 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:36:09.834188   14687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:09.834610   14687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:09.836196   14687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:09.836779   14687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:09.838239   14687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:36:09.841974 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:36:09.841984 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:36:09.904615 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:36:09.904636 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
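Every "describe nodes" attempt above fails identically: the bundled kubectl dials https://localhost:8441 and gets connection refused, which is consistent with the crictl checks finding no kube-apiserver (or any other control-plane) container. A quick check that nothing is listening on the apiserver port, sketched under the assumption that ss and curl are available in the node image:

    # Inside the node: confirm there is no listener on the apiserver port 8441 seen in the errors above.
    sudo ss -ltn | grep -w 8441 || echo "no listener on 8441"
    curl -sk https://localhost:8441/healthz || echo "apiserver unreachable"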
	I1209 04:36:12.433539 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:36:12.443370 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:36:12.443435 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:36:12.469616 1193189 cri.go:89] found id: ""
	I1209 04:36:12.469630 1193189 logs.go:282] 0 containers: []
	W1209 04:36:12.469637 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:36:12.469643 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:36:12.469704 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:36:12.493917 1193189 cri.go:89] found id: ""
	I1209 04:36:12.493930 1193189 logs.go:282] 0 containers: []
	W1209 04:36:12.493937 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:36:12.493942 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:36:12.494001 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:36:12.518803 1193189 cri.go:89] found id: ""
	I1209 04:36:12.518817 1193189 logs.go:282] 0 containers: []
	W1209 04:36:12.518842 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:36:12.518848 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:36:12.518917 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:36:12.542764 1193189 cri.go:89] found id: ""
	I1209 04:36:12.542785 1193189 logs.go:282] 0 containers: []
	W1209 04:36:12.542792 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:36:12.542797 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:36:12.542859 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:36:12.566738 1193189 cri.go:89] found id: ""
	I1209 04:36:12.566751 1193189 logs.go:282] 0 containers: []
	W1209 04:36:12.566758 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:36:12.566762 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:36:12.566830 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:36:12.594757 1193189 cri.go:89] found id: ""
	I1209 04:36:12.594772 1193189 logs.go:282] 0 containers: []
	W1209 04:36:12.594778 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:36:12.594784 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:36:12.594850 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:36:12.619407 1193189 cri.go:89] found id: ""
	I1209 04:36:12.619421 1193189 logs.go:282] 0 containers: []
	W1209 04:36:12.619427 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:36:12.619434 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:36:12.619445 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:36:12.692974 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:36:12.683791   14780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:12.684626   14780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:12.686439   14780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:12.687100   14780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:12.688999   14780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:36:12.692984 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:36:12.693001 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:36:12.766313 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:36:12.766340 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:36:12.793057 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:36:12.793075 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:36:12.849665 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:36:12.849689 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:36:15.366796 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:36:15.376649 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:36:15.376719 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:36:15.400344 1193189 cri.go:89] found id: ""
	I1209 04:36:15.400358 1193189 logs.go:282] 0 containers: []
	W1209 04:36:15.400372 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:36:15.400378 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:36:15.400437 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:36:15.425809 1193189 cri.go:89] found id: ""
	I1209 04:36:15.425822 1193189 logs.go:282] 0 containers: []
	W1209 04:36:15.425829 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:36:15.425834 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:36:15.425894 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:36:15.450444 1193189 cri.go:89] found id: ""
	I1209 04:36:15.450458 1193189 logs.go:282] 0 containers: []
	W1209 04:36:15.450466 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:36:15.450471 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:36:15.450531 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:36:15.478163 1193189 cri.go:89] found id: ""
	I1209 04:36:15.478178 1193189 logs.go:282] 0 containers: []
	W1209 04:36:15.478185 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:36:15.478190 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:36:15.478261 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:36:15.502360 1193189 cri.go:89] found id: ""
	I1209 04:36:15.502374 1193189 logs.go:282] 0 containers: []
	W1209 04:36:15.502381 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:36:15.502386 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:36:15.502450 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:36:15.530599 1193189 cri.go:89] found id: ""
	I1209 04:36:15.530614 1193189 logs.go:282] 0 containers: []
	W1209 04:36:15.530620 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:36:15.530626 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:36:15.530693 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:36:15.554654 1193189 cri.go:89] found id: ""
	I1209 04:36:15.554668 1193189 logs.go:282] 0 containers: []
	W1209 04:36:15.554675 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:36:15.554683 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:36:15.554693 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:36:15.614962 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:36:15.614982 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:36:15.641417 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:36:15.641433 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:36:15.696674 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:36:15.696692 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:36:15.714032 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:36:15.714047 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:36:15.786226 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:36:15.778061   14907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:15.778499   14907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:15.780149   14907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:15.780759   14907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:15.782381   14907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:36:18.286483 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:36:18.296288 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:36:18.296346 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:36:18.323616 1193189 cri.go:89] found id: ""
	I1209 04:36:18.323629 1193189 logs.go:282] 0 containers: []
	W1209 04:36:18.323636 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:36:18.323642 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:36:18.323706 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:36:18.348203 1193189 cri.go:89] found id: ""
	I1209 04:36:18.348218 1193189 logs.go:282] 0 containers: []
	W1209 04:36:18.348225 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:36:18.348231 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:36:18.348290 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:36:18.372639 1193189 cri.go:89] found id: ""
	I1209 04:36:18.372653 1193189 logs.go:282] 0 containers: []
	W1209 04:36:18.372660 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:36:18.372671 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:36:18.372732 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:36:18.400006 1193189 cri.go:89] found id: ""
	I1209 04:36:18.400037 1193189 logs.go:282] 0 containers: []
	W1209 04:36:18.400044 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:36:18.400049 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:36:18.400120 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:36:18.424038 1193189 cri.go:89] found id: ""
	I1209 04:36:18.424053 1193189 logs.go:282] 0 containers: []
	W1209 04:36:18.424060 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:36:18.424068 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:36:18.424135 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:36:18.447692 1193189 cri.go:89] found id: ""
	I1209 04:36:18.447719 1193189 logs.go:282] 0 containers: []
	W1209 04:36:18.447726 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:36:18.447737 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:36:18.447809 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:36:18.473888 1193189 cri.go:89] found id: ""
	I1209 04:36:18.473902 1193189 logs.go:282] 0 containers: []
	W1209 04:36:18.473908 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:36:18.473916 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:36:18.473925 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:36:18.531920 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:36:18.531945 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:36:18.549523 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:36:18.549540 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:36:18.610296 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:36:18.601988   14993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:18.602374   14993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:18.603904   14993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:18.604520   14993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:18.606270   14993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:36:18.610306 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:36:18.610316 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:36:18.673185 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:36:18.673204 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:36:21.215945 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:36:21.225779 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:36:21.225842 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:36:21.251614 1193189 cri.go:89] found id: ""
	I1209 04:36:21.251627 1193189 logs.go:282] 0 containers: []
	W1209 04:36:21.251633 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:36:21.251639 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:36:21.251701 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:36:21.274988 1193189 cri.go:89] found id: ""
	I1209 04:36:21.275002 1193189 logs.go:282] 0 containers: []
	W1209 04:36:21.275009 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:36:21.275016 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:36:21.275073 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:36:21.298100 1193189 cri.go:89] found id: ""
	I1209 04:36:21.298113 1193189 logs.go:282] 0 containers: []
	W1209 04:36:21.298120 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:36:21.298125 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:36:21.298188 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:36:21.323043 1193189 cri.go:89] found id: ""
	I1209 04:36:21.323057 1193189 logs.go:282] 0 containers: []
	W1209 04:36:21.323063 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:36:21.323068 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:36:21.323128 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:36:21.346629 1193189 cri.go:89] found id: ""
	I1209 04:36:21.346642 1193189 logs.go:282] 0 containers: []
	W1209 04:36:21.346649 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:36:21.346654 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:36:21.346713 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:36:21.370687 1193189 cri.go:89] found id: ""
	I1209 04:36:21.370700 1193189 logs.go:282] 0 containers: []
	W1209 04:36:21.370707 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:36:21.370712 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:36:21.370767 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:36:21.394774 1193189 cri.go:89] found id: ""
	I1209 04:36:21.394788 1193189 logs.go:282] 0 containers: []
	W1209 04:36:21.394794 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:36:21.394803 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:36:21.394813 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:36:21.458240 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:36:21.449927   15094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:21.450664   15094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:21.452537   15094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:21.452900   15094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:21.454442   15094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:36:21.458249 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:36:21.458260 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:36:21.519830 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:36:21.519850 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:36:21.556076 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:36:21.556093 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:36:21.614749 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:36:21.614769 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:36:24.132222 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:36:24.143277 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:36:24.143352 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:36:24.173051 1193189 cri.go:89] found id: ""
	I1209 04:36:24.173065 1193189 logs.go:282] 0 containers: []
	W1209 04:36:24.173072 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:36:24.173077 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:36:24.173134 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:36:24.198407 1193189 cri.go:89] found id: ""
	I1209 04:36:24.198421 1193189 logs.go:282] 0 containers: []
	W1209 04:36:24.198428 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:36:24.198432 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:36:24.198490 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:36:24.224986 1193189 cri.go:89] found id: ""
	I1209 04:36:24.225000 1193189 logs.go:282] 0 containers: []
	W1209 04:36:24.225007 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:36:24.225012 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:36:24.225071 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:36:24.249942 1193189 cri.go:89] found id: ""
	I1209 04:36:24.249957 1193189 logs.go:282] 0 containers: []
	W1209 04:36:24.249964 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:36:24.249969 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:36:24.250031 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:36:24.274252 1193189 cri.go:89] found id: ""
	I1209 04:36:24.274266 1193189 logs.go:282] 0 containers: []
	W1209 04:36:24.274273 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:36:24.274278 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:36:24.274347 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:36:24.302468 1193189 cri.go:89] found id: ""
	I1209 04:36:24.302485 1193189 logs.go:282] 0 containers: []
	W1209 04:36:24.302491 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:36:24.302497 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:36:24.302582 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:36:24.328883 1193189 cri.go:89] found id: ""
	I1209 04:36:24.328898 1193189 logs.go:282] 0 containers: []
	W1209 04:36:24.328905 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:36:24.328913 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:36:24.328923 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:36:24.386082 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:36:24.386102 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:36:24.403782 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:36:24.403798 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:36:24.473588 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:36:24.462744   15202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:24.463330   15202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:24.466259   15202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:24.467650   15202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:24.468411   15202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:36:24.473598 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:36:24.473609 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:36:24.534819 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:36:24.534841 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:36:27.064221 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:36:27.074260 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:36:27.074334 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:36:27.098417 1193189 cri.go:89] found id: ""
	I1209 04:36:27.098445 1193189 logs.go:282] 0 containers: []
	W1209 04:36:27.098452 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:36:27.098457 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:36:27.098527 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:36:27.126158 1193189 cri.go:89] found id: ""
	I1209 04:36:27.126172 1193189 logs.go:282] 0 containers: []
	W1209 04:36:27.126184 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:36:27.126189 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:36:27.126250 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:36:27.154258 1193189 cri.go:89] found id: ""
	I1209 04:36:27.154271 1193189 logs.go:282] 0 containers: []
	W1209 04:36:27.154278 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:36:27.154284 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:36:27.154343 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:36:27.179273 1193189 cri.go:89] found id: ""
	I1209 04:36:27.179286 1193189 logs.go:282] 0 containers: []
	W1209 04:36:27.179293 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:36:27.179309 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:36:27.179367 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:36:27.204706 1193189 cri.go:89] found id: ""
	I1209 04:36:27.204720 1193189 logs.go:282] 0 containers: []
	W1209 04:36:27.204727 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:36:27.204732 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:36:27.204791 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:36:27.230005 1193189 cri.go:89] found id: ""
	I1209 04:36:27.230019 1193189 logs.go:282] 0 containers: []
	W1209 04:36:27.230026 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:36:27.230032 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:36:27.230098 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:36:27.254482 1193189 cri.go:89] found id: ""
	I1209 04:36:27.254496 1193189 logs.go:282] 0 containers: []
	W1209 04:36:27.254512 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:36:27.254521 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:36:27.254531 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:36:27.310002 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:36:27.310022 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:36:27.327694 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:36:27.327713 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:36:27.395258 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:36:27.386987   15311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:27.387759   15311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:27.389469   15311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:27.389968   15311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:27.391467   15311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:36:27.395269 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:36:27.395279 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:36:27.457675 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:36:27.457694 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:36:29.986185 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:36:30.005634 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:36:30.005711 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:36:30.038694 1193189 cri.go:89] found id: ""
	I1209 04:36:30.038709 1193189 logs.go:282] 0 containers: []
	W1209 04:36:30.038717 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:36:30.038723 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:36:30.038792 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:36:30.065088 1193189 cri.go:89] found id: ""
	I1209 04:36:30.065110 1193189 logs.go:282] 0 containers: []
	W1209 04:36:30.065119 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:36:30.065124 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:36:30.065188 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:36:30.090159 1193189 cri.go:89] found id: ""
	I1209 04:36:30.090173 1193189 logs.go:282] 0 containers: []
	W1209 04:36:30.090180 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:36:30.090185 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:36:30.090250 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:36:30.118708 1193189 cri.go:89] found id: ""
	I1209 04:36:30.118721 1193189 logs.go:282] 0 containers: []
	W1209 04:36:30.118728 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:36:30.118734 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:36:30.118796 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:36:30.146404 1193189 cri.go:89] found id: ""
	I1209 04:36:30.146417 1193189 logs.go:282] 0 containers: []
	W1209 04:36:30.146424 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:36:30.146429 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:36:30.146488 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:36:30.170089 1193189 cri.go:89] found id: ""
	I1209 04:36:30.170102 1193189 logs.go:282] 0 containers: []
	W1209 04:36:30.170109 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:36:30.170114 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:36:30.170171 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:36:30.194303 1193189 cri.go:89] found id: ""
	I1209 04:36:30.194317 1193189 logs.go:282] 0 containers: []
	W1209 04:36:30.194327 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:36:30.194334 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:36:30.194344 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:36:30.230597 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:36:30.230613 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:36:30.285894 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:36:30.285913 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:36:30.303774 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:36:30.303789 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:36:30.370275 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:36:30.361691   15431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:30.362453   15431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:30.364059   15431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:30.364598   15431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:30.366280   15431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:36:30.370284 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:36:30.370297 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
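The check-and-gather cycle above repeats at roughly three-second intervals with identical results each time. A sketch of an equivalent wait loop, assuming the same pgrep pattern minikube uses above; the sleep interval is inferred from the log timestamps:

    # Poll until an apiserver process appears, mirroring the ~3s cadence in the log above.
    while ! sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      sudo crictl ps -a --quiet --name=kube-apiserver   # same container check as the cycles above
      sleep 3
    done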
	I1209 04:36:32.932454 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:36:32.942712 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:36:32.942772 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:36:32.970393 1193189 cri.go:89] found id: ""
	I1209 04:36:32.970406 1193189 logs.go:282] 0 containers: []
	W1209 04:36:32.970413 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:36:32.970418 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:36:32.970480 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:36:33.001462 1193189 cri.go:89] found id: ""
	I1209 04:36:33.001476 1193189 logs.go:282] 0 containers: []
	W1209 04:36:33.001489 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:36:33.001495 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:36:33.001561 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:36:33.027773 1193189 cri.go:89] found id: ""
	I1209 04:36:33.027787 1193189 logs.go:282] 0 containers: []
	W1209 04:36:33.027794 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:36:33.027799 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:36:33.027858 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:36:33.054066 1193189 cri.go:89] found id: ""
	I1209 04:36:33.054080 1193189 logs.go:282] 0 containers: []
	W1209 04:36:33.054086 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:36:33.054091 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:36:33.054152 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:36:33.077043 1193189 cri.go:89] found id: ""
	I1209 04:36:33.077057 1193189 logs.go:282] 0 containers: []
	W1209 04:36:33.077064 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:36:33.077069 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:36:33.077127 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:36:33.101043 1193189 cri.go:89] found id: ""
	I1209 04:36:33.101056 1193189 logs.go:282] 0 containers: []
	W1209 04:36:33.101063 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:36:33.101068 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:36:33.101126 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:36:33.125074 1193189 cri.go:89] found id: ""
	I1209 04:36:33.125088 1193189 logs.go:282] 0 containers: []
	W1209 04:36:33.125096 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:36:33.125104 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:36:33.125115 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:36:33.181829 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:36:33.181849 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:36:33.198599 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:36:33.198616 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:36:33.259348 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:36:33.250653   15527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:33.251506   15527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:33.253061   15527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:33.253668   15527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:33.255199   15527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:36:33.259358 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:36:33.259369 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:36:33.321638 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:36:33.321660 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
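Each cycle queries crictl for one control-plane component at a time and finds nothing. The same sweep compressed into one loop, runnable inside the node via minikube ssh (a sketch; the component list is copied from the log lines above):

    # List container IDs per component; "<none>" matches the log's found id: ""
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
      ids=$(sudo crictl ps -a --quiet --name="$c")
      echo "$c: ${ids:-<none>}"
    done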
	I1209 04:36:35.847785 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:36:35.857973 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:36:35.858039 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:36:35.882819 1193189 cri.go:89] found id: ""
	I1209 04:36:35.882832 1193189 logs.go:282] 0 containers: []
	W1209 04:36:35.882839 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:36:35.882844 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:36:35.882908 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:36:35.911762 1193189 cri.go:89] found id: ""
	I1209 04:36:35.911776 1193189 logs.go:282] 0 containers: []
	W1209 04:36:35.911784 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:36:35.911789 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:36:35.911849 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:36:35.946631 1193189 cri.go:89] found id: ""
	I1209 04:36:35.946646 1193189 logs.go:282] 0 containers: []
	W1209 04:36:35.946652 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:36:35.946663 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:36:35.946721 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:36:35.972345 1193189 cri.go:89] found id: ""
	I1209 04:36:35.972360 1193189 logs.go:282] 0 containers: []
	W1209 04:36:35.972367 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:36:35.972372 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:36:35.972438 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:36:36.010844 1193189 cri.go:89] found id: ""
	I1209 04:36:36.010859 1193189 logs.go:282] 0 containers: []
	W1209 04:36:36.010867 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:36:36.010876 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:36:36.010940 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:36:36.036297 1193189 cri.go:89] found id: ""
	I1209 04:36:36.036310 1193189 logs.go:282] 0 containers: []
	W1209 04:36:36.036317 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:36:36.036323 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:36:36.036387 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:36:36.066383 1193189 cri.go:89] found id: ""
	I1209 04:36:36.066398 1193189 logs.go:282] 0 containers: []
	W1209 04:36:36.066404 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:36:36.066412 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:36:36.066422 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:36:36.123320 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:36:36.123340 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:36:36.141674 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:36:36.141691 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:36:36.207738 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:36:36.198534   15631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:36.199238   15631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:36.201129   15631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:36.201829   15631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:36.203559   15631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:36:36.207749 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:36:36.207760 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:36:36.271530 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:36:36.271553 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
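The "Gathering logs for ..." steps are plain journalctl reads of the kubelet and containerd units. To pull the same 400-line tails out to the host for offline inspection (a sketch; the local file names are illustrative, not from the report):

    minikube -p functional-667319 ssh -- "sudo journalctl -u kubelet -n 400 --no-pager"    > kubelet.log
    minikube -p functional-667319 ssh -- "sudo journalctl -u containerd -n 400 --no-pager" > containerd.log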
	I1209 04:36:38.808031 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:36:38.818384 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:36:38.818445 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:36:38.842672 1193189 cri.go:89] found id: ""
	I1209 04:36:38.842686 1193189 logs.go:282] 0 containers: []
	W1209 04:36:38.842692 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:36:38.842697 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:36:38.842757 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:36:38.867351 1193189 cri.go:89] found id: ""
	I1209 04:36:38.867365 1193189 logs.go:282] 0 containers: []
	W1209 04:36:38.867371 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:36:38.867376 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:36:38.867436 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:36:38.891443 1193189 cri.go:89] found id: ""
	I1209 04:36:38.891456 1193189 logs.go:282] 0 containers: []
	W1209 04:36:38.891463 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:36:38.891469 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:36:38.891530 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:36:38.916345 1193189 cri.go:89] found id: ""
	I1209 04:36:38.916359 1193189 logs.go:282] 0 containers: []
	W1209 04:36:38.916366 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:36:38.916371 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:36:38.916435 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:36:38.949316 1193189 cri.go:89] found id: ""
	I1209 04:36:38.949330 1193189 logs.go:282] 0 containers: []
	W1209 04:36:38.949348 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:36:38.949354 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:36:38.949427 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:36:38.983440 1193189 cri.go:89] found id: ""
	I1209 04:36:38.983453 1193189 logs.go:282] 0 containers: []
	W1209 04:36:38.983472 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:36:38.983479 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:36:38.983548 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:36:39.016431 1193189 cri.go:89] found id: ""
	I1209 04:36:39.016445 1193189 logs.go:282] 0 containers: []
	W1209 04:36:39.016452 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:36:39.016460 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:36:39.016470 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:36:39.072919 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:36:39.072940 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:36:39.091632 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:36:39.091649 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:36:39.155205 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:36:39.147101   15734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:39.147530   15734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:39.149195   15734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:39.149594   15734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:39.151291   15734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:36:39.155215 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:36:39.155237 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:36:39.217334 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:36:39.217354 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
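The pgrep probe that opens each cycle asks whether any apiserver process exists at all: -f matches against the full command line, -x requires the pattern to match that whole line, and -n keeps only the newest match. Run standalone inside the node (the fallback echo is illustrative):

    sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "no kube-apiserver process running"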
	I1209 04:36:41.745095 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:36:41.755765 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:36:41.755830 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:36:41.788789 1193189 cri.go:89] found id: ""
	I1209 04:36:41.788815 1193189 logs.go:282] 0 containers: []
	W1209 04:36:41.788821 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:36:41.788827 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:36:41.788905 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:36:41.818341 1193189 cri.go:89] found id: ""
	I1209 04:36:41.818363 1193189 logs.go:282] 0 containers: []
	W1209 04:36:41.818371 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:36:41.818376 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:36:41.818443 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:36:41.847734 1193189 cri.go:89] found id: ""
	I1209 04:36:41.847748 1193189 logs.go:282] 0 containers: []
	W1209 04:36:41.847754 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:36:41.847768 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:36:41.847827 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:36:41.871920 1193189 cri.go:89] found id: ""
	I1209 04:36:41.871943 1193189 logs.go:282] 0 containers: []
	W1209 04:36:41.871950 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:36:41.871955 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:36:41.872035 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:36:41.897849 1193189 cri.go:89] found id: ""
	I1209 04:36:41.897863 1193189 logs.go:282] 0 containers: []
	W1209 04:36:41.897870 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:36:41.897875 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:36:41.897936 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:36:41.923060 1193189 cri.go:89] found id: ""
	I1209 04:36:41.923083 1193189 logs.go:282] 0 containers: []
	W1209 04:36:41.923090 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:36:41.923096 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:36:41.923163 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:36:41.952660 1193189 cri.go:89] found id: ""
	I1209 04:36:41.952684 1193189 logs.go:282] 0 containers: []
	W1209 04:36:41.952692 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:36:41.952699 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:36:41.952709 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:36:42.023725 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:36:42.023763 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:36:42.042594 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:36:42.042613 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:36:42.123707 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:36:42.110165   15835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:42.110742   15835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:42.112376   15835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:42.113625   15835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:42.114588   15835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:36:42.123742 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:36:42.123763 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:36:42.205354 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:36:42.205378 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
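The timestamps advance by roughly three seconds per cycle, consistent with a fixed-interval wait around the apiserver check. A rough bash stand-in for that cadence (an illustration only, not minikube's actual implementation):

    # Retry every 3 s until a kube-apiserver container appears.
    until sudo crictl ps --quiet --name=kube-apiserver | grep -q .; do
      sleep 3
    done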
	I1209 04:36:44.742230 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:36:44.752061 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:36:44.752130 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:36:44.777546 1193189 cri.go:89] found id: ""
	I1209 04:36:44.777560 1193189 logs.go:282] 0 containers: []
	W1209 04:36:44.777567 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:36:44.777573 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:36:44.777640 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:36:44.800656 1193189 cri.go:89] found id: ""
	I1209 04:36:44.800670 1193189 logs.go:282] 0 containers: []
	W1209 04:36:44.800677 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:36:44.800681 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:36:44.800746 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:36:44.823629 1193189 cri.go:89] found id: ""
	I1209 04:36:44.823643 1193189 logs.go:282] 0 containers: []
	W1209 04:36:44.823649 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:36:44.823654 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:36:44.823710 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:36:44.847779 1193189 cri.go:89] found id: ""
	I1209 04:36:44.847792 1193189 logs.go:282] 0 containers: []
	W1209 04:36:44.847799 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:36:44.847804 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:36:44.847864 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:36:44.871420 1193189 cri.go:89] found id: ""
	I1209 04:36:44.871434 1193189 logs.go:282] 0 containers: []
	W1209 04:36:44.871441 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:36:44.871446 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:36:44.871502 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:36:44.897429 1193189 cri.go:89] found id: ""
	I1209 04:36:44.897443 1193189 logs.go:282] 0 containers: []
	W1209 04:36:44.897450 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:36:44.897455 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:36:44.897515 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:36:44.921002 1193189 cri.go:89] found id: ""
	I1209 04:36:44.921016 1193189 logs.go:282] 0 containers: []
	W1209 04:36:44.921023 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:36:44.921030 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:36:44.921050 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:36:44.943906 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:36:44.943923 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:36:45.040267 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:36:45.023556   15936 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:45.024395   15936 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:45.026904   15936 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:45.028346   15936 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:45.029304   15936 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:36:45.040278 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:36:45.040290 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:36:45.111615 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:36:45.111641 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:36:45.154764 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:36:45.154783 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
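Every "describe nodes" attempt uses the kubectl binary and admin kubeconfig that minikube stages on the node, so the check can be rerun verbatim when debugging by hand (command taken directly from the report):

    minikube -p functional-667319 ssh -- \
      "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"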
	I1209 04:36:47.737899 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:36:47.748114 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:36:47.748183 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:36:47.772307 1193189 cri.go:89] found id: ""
	I1209 04:36:47.772321 1193189 logs.go:282] 0 containers: []
	W1209 04:36:47.772327 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:36:47.772333 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:36:47.772392 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:36:47.796250 1193189 cri.go:89] found id: ""
	I1209 04:36:47.796264 1193189 logs.go:282] 0 containers: []
	W1209 04:36:47.796271 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:36:47.796276 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:36:47.796337 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:36:47.820196 1193189 cri.go:89] found id: ""
	I1209 04:36:47.820209 1193189 logs.go:282] 0 containers: []
	W1209 04:36:47.820217 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:36:47.820222 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:36:47.820279 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:36:47.844179 1193189 cri.go:89] found id: ""
	I1209 04:36:47.844193 1193189 logs.go:282] 0 containers: []
	W1209 04:36:47.844200 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:36:47.844205 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:36:47.844261 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:36:47.871664 1193189 cri.go:89] found id: ""
	I1209 04:36:47.871678 1193189 logs.go:282] 0 containers: []
	W1209 04:36:47.871685 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:36:47.871689 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:36:47.871746 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:36:47.897882 1193189 cri.go:89] found id: ""
	I1209 04:36:47.897896 1193189 logs.go:282] 0 containers: []
	W1209 04:36:47.897902 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:36:47.897907 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:36:47.897968 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:36:47.925663 1193189 cri.go:89] found id: ""
	I1209 04:36:47.925678 1193189 logs.go:282] 0 containers: []
	W1209 04:36:47.925684 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:36:47.925692 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:36:47.925702 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:36:47.982430 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:36:47.982448 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:36:48.003029 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:36:48.003046 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:36:48.081084 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:36:48.072200   16045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:48.073024   16045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:48.074652   16045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:48.075018   16045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:48.076580   16045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:36:48.081095 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:36:48.081114 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:36:48.144865 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:36:48.144883 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:36:50.676655 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:36:50.687887 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:36:50.687948 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:36:50.712477 1193189 cri.go:89] found id: ""
	I1209 04:36:50.712492 1193189 logs.go:282] 0 containers: []
	W1209 04:36:50.712498 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:36:50.712504 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:36:50.712560 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:36:50.743459 1193189 cri.go:89] found id: ""
	I1209 04:36:50.743472 1193189 logs.go:282] 0 containers: []
	W1209 04:36:50.743479 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:36:50.743484 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:36:50.743559 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:36:50.769066 1193189 cri.go:89] found id: ""
	I1209 04:36:50.769080 1193189 logs.go:282] 0 containers: []
	W1209 04:36:50.769087 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:36:50.769093 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:36:50.769149 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:36:50.792910 1193189 cri.go:89] found id: ""
	I1209 04:36:50.792924 1193189 logs.go:282] 0 containers: []
	W1209 04:36:50.792931 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:36:50.792942 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:36:50.793002 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:36:50.817006 1193189 cri.go:89] found id: ""
	I1209 04:36:50.817020 1193189 logs.go:282] 0 containers: []
	W1209 04:36:50.817027 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:36:50.817033 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:36:50.817108 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:36:50.840981 1193189 cri.go:89] found id: ""
	I1209 04:36:50.840995 1193189 logs.go:282] 0 containers: []
	W1209 04:36:50.841002 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:36:50.841007 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:36:50.841065 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:36:50.864484 1193189 cri.go:89] found id: ""
	I1209 04:36:50.864498 1193189 logs.go:282] 0 containers: []
	W1209 04:36:50.864504 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:36:50.864512 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:36:50.864522 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:36:50.934409 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:36:50.923680   16138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:50.924264   16138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:50.925919   16138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:50.926350   16138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:50.927812   16138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:36:50.934428 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:36:50.934439 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:36:51.007145 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:36:51.007168 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:36:51.035885 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:36:51.035901 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:36:51.094880 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:36:51.094903 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
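The dmesg step filters the kernel ring buffer down to warning-and-worse messages: -P disables the pager that -H (human-readable output) would otherwise invoke, and -L=never turns off color. The same command respelled with long options, for readability:

    sudo dmesg --nopager --human --color=never --level warn,err,crit,alert,emerg | tail -n 400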
	I1209 04:36:53.613358 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:36:53.623300 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:36:53.623360 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:36:53.649605 1193189 cri.go:89] found id: ""
	I1209 04:36:53.649619 1193189 logs.go:282] 0 containers: []
	W1209 04:36:53.649625 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:36:53.649630 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:36:53.649688 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:36:53.673756 1193189 cri.go:89] found id: ""
	I1209 04:36:53.673771 1193189 logs.go:282] 0 containers: []
	W1209 04:36:53.673777 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:36:53.673782 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:36:53.673841 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:36:53.697312 1193189 cri.go:89] found id: ""
	I1209 04:36:53.697326 1193189 logs.go:282] 0 containers: []
	W1209 04:36:53.697333 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:36:53.697339 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:36:53.697405 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:36:53.721559 1193189 cri.go:89] found id: ""
	I1209 04:36:53.721573 1193189 logs.go:282] 0 containers: []
	W1209 04:36:53.721580 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:36:53.721585 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:36:53.721643 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:36:53.745640 1193189 cri.go:89] found id: ""
	I1209 04:36:53.745654 1193189 logs.go:282] 0 containers: []
	W1209 04:36:53.745661 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:36:53.745666 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:36:53.745724 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:36:53.770072 1193189 cri.go:89] found id: ""
	I1209 04:36:53.770086 1193189 logs.go:282] 0 containers: []
	W1209 04:36:53.770093 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:36:53.770099 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:36:53.770161 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:36:53.793834 1193189 cri.go:89] found id: ""
	I1209 04:36:53.793848 1193189 logs.go:282] 0 containers: []
	W1209 04:36:53.793856 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:36:53.793864 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:36:53.793873 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:36:53.853273 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:36:53.853293 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:36:53.870522 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:36:53.870539 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:36:53.937367 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:36:53.928497   16249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:53.929009   16249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:53.930701   16249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:53.931304   16249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:53.932870   16249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:36:53.937377 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:36:53.937387 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:36:54.005219 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:36:54.005240 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
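The container-status step embeds a two-stage fallback: "which crictl || echo crictl" resolves crictl's full path when installed (otherwise leaves the bare name for PATH lookup), and if the crictl listing fails outright, the "|| sudo docker ps -a" half tries the docker CLI instead. The same logic spelled out one step per line:

    runtime_cli=$(which crictl || echo crictl)      # full path if installed, bare name otherwise
    sudo "$runtime_cli" ps -a || sudo docker ps -a  # fall back to docker when crictl fails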
	I1209 04:36:56.538809 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:36:56.548679 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:36:56.548738 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:36:56.572505 1193189 cri.go:89] found id: ""
	I1209 04:36:56.572519 1193189 logs.go:282] 0 containers: []
	W1209 04:36:56.572526 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:36:56.572531 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:36:56.572591 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:36:56.596732 1193189 cri.go:89] found id: ""
	I1209 04:36:56.596746 1193189 logs.go:282] 0 containers: []
	W1209 04:36:56.596753 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:36:56.596758 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:36:56.596817 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:36:56.622042 1193189 cri.go:89] found id: ""
	I1209 04:36:56.622056 1193189 logs.go:282] 0 containers: []
	W1209 04:36:56.622063 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:36:56.622068 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:36:56.622125 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:36:56.644865 1193189 cri.go:89] found id: ""
	I1209 04:36:56.644879 1193189 logs.go:282] 0 containers: []
	W1209 04:36:56.644885 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:36:56.644890 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:36:56.644947 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:36:56.670230 1193189 cri.go:89] found id: ""
	I1209 04:36:56.670244 1193189 logs.go:282] 0 containers: []
	W1209 04:36:56.670252 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:36:56.670257 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:36:56.670314 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:36:56.697566 1193189 cri.go:89] found id: ""
	I1209 04:36:56.697580 1193189 logs.go:282] 0 containers: []
	W1209 04:36:56.697586 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:36:56.697592 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:36:56.697650 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:36:56.726250 1193189 cri.go:89] found id: ""
	I1209 04:36:56.726264 1193189 logs.go:282] 0 containers: []
	W1209 04:36:56.726270 1193189 logs.go:284] No container was found matching "kindnet"
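Each diagnostic pass enumerates the expected control-plane and CNI containers by name; an empty ID list for every component means the control plane never came up under containerd. The same probe, run inside the node (for example via minikube ssh), condenses to roughly:

    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet; do
      echo "== $c =="
      sudo crictl ps -a --quiet --name="$c"
    done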
	I1209 04:36:56.726278 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:36:56.726287 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:36:56.789536 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:36:56.789556 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:36:56.818317 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:36:56.818332 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:36:56.874653 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:36:56.874671 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:36:56.892967 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:36:56.892987 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:36:56.969870 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:36:56.961196   16367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:56.962227   16367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:56.964000   16367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:56.964364   16367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:56.965851   16367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
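Every describe-nodes attempt fails the same way: nothing is serving on the configured apiserver port 8441, so the kubectl client is refused at the TCP level. A quick confirmation from inside the node, sketched here as an assumption about how one might verify rather than a step the harness runs:

    sudo ss -ltnp | grep ':8441' || echo 'nothing listening on 8441'
    curl -sk https://localhost:8441/healthz; echo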
	I1209 04:36:59.470133 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:36:59.480193 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:36:59.480253 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:36:59.505288 1193189 cri.go:89] found id: ""
	I1209 04:36:59.505301 1193189 logs.go:282] 0 containers: []
	W1209 04:36:59.505308 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:36:59.505314 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:36:59.505375 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:36:59.530093 1193189 cri.go:89] found id: ""
	I1209 04:36:59.530108 1193189 logs.go:282] 0 containers: []
	W1209 04:36:59.530114 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:36:59.530120 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:36:59.530180 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:36:59.558857 1193189 cri.go:89] found id: ""
	I1209 04:36:59.558870 1193189 logs.go:282] 0 containers: []
	W1209 04:36:59.558877 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:36:59.558882 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:36:59.558939 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:36:59.587253 1193189 cri.go:89] found id: ""
	I1209 04:36:59.587267 1193189 logs.go:282] 0 containers: []
	W1209 04:36:59.587273 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:36:59.587278 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:36:59.587334 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:36:59.615574 1193189 cri.go:89] found id: ""
	I1209 04:36:59.615587 1193189 logs.go:282] 0 containers: []
	W1209 04:36:59.615594 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:36:59.615599 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:36:59.615661 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:36:59.640949 1193189 cri.go:89] found id: ""
	I1209 04:36:59.640963 1193189 logs.go:282] 0 containers: []
	W1209 04:36:59.640969 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:36:59.640975 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:36:59.641036 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:36:59.669059 1193189 cri.go:89] found id: ""
	I1209 04:36:59.669073 1193189 logs.go:282] 0 containers: []
	W1209 04:36:59.669079 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:36:59.669087 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:36:59.669099 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:36:59.728975 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:36:59.728993 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:36:59.746224 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:36:59.746240 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:36:59.811892 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:36:59.803565   16459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:59.804329   16459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:59.805884   16459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:59.806435   16459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:59.808154   16459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:36:59.811908 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:36:59.811919 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:36:59.874287 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:36:59.874310 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:37:02.402643 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:37:02.413719 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:37:02.413785 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:37:02.440871 1193189 cri.go:89] found id: ""
	I1209 04:37:02.440885 1193189 logs.go:282] 0 containers: []
	W1209 04:37:02.440892 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:37:02.440897 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:37:02.440962 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:37:02.466112 1193189 cri.go:89] found id: ""
	I1209 04:37:02.466125 1193189 logs.go:282] 0 containers: []
	W1209 04:37:02.466132 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:37:02.466137 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:37:02.466195 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:37:02.491412 1193189 cri.go:89] found id: ""
	I1209 04:37:02.491426 1193189 logs.go:282] 0 containers: []
	W1209 04:37:02.491433 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:37:02.491438 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:37:02.491495 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:37:02.519036 1193189 cri.go:89] found id: ""
	I1209 04:37:02.519051 1193189 logs.go:282] 0 containers: []
	W1209 04:37:02.519058 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:37:02.519063 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:37:02.519126 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:37:02.547912 1193189 cri.go:89] found id: ""
	I1209 04:37:02.547927 1193189 logs.go:282] 0 containers: []
	W1209 04:37:02.547934 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:37:02.547939 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:37:02.548000 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:37:02.574804 1193189 cri.go:89] found id: ""
	I1209 04:37:02.574818 1193189 logs.go:282] 0 containers: []
	W1209 04:37:02.574826 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:37:02.574832 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:37:02.574910 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:37:02.598953 1193189 cri.go:89] found id: ""
	I1209 04:37:02.598967 1193189 logs.go:282] 0 containers: []
	W1209 04:37:02.598973 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:37:02.598981 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:37:02.598994 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:37:02.661273 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:37:02.661293 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:37:02.692376 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:37:02.692392 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:37:02.750097 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:37:02.750116 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:37:02.768673 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:37:02.768691 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:37:02.831464 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:37:02.822705   16576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:37:02.823490   16576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:37:02.825015   16576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:37:02.825561   16576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:37:02.827104   16576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:37:05.331744 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:37:05.341534 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:37:05.341596 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:37:05.366255 1193189 cri.go:89] found id: ""
	I1209 04:37:05.366268 1193189 logs.go:282] 0 containers: []
	W1209 04:37:05.366275 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:37:05.366280 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:37:05.366339 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:37:05.391184 1193189 cri.go:89] found id: ""
	I1209 04:37:05.391198 1193189 logs.go:282] 0 containers: []
	W1209 04:37:05.391204 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:37:05.391211 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:37:05.391273 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:37:05.418240 1193189 cri.go:89] found id: ""
	I1209 04:37:05.418253 1193189 logs.go:282] 0 containers: []
	W1209 04:37:05.418259 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:37:05.418264 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:37:05.418327 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:37:05.442720 1193189 cri.go:89] found id: ""
	I1209 04:37:05.442734 1193189 logs.go:282] 0 containers: []
	W1209 04:37:05.442740 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:37:05.442746 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:37:05.442809 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:37:05.467915 1193189 cri.go:89] found id: ""
	I1209 04:37:05.467930 1193189 logs.go:282] 0 containers: []
	W1209 04:37:05.467937 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:37:05.467942 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:37:05.468009 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:37:05.491304 1193189 cri.go:89] found id: ""
	I1209 04:37:05.491318 1193189 logs.go:282] 0 containers: []
	W1209 04:37:05.491325 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:37:05.491330 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:37:05.491388 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:37:05.520597 1193189 cri.go:89] found id: ""
	I1209 04:37:05.520616 1193189 logs.go:282] 0 containers: []
	W1209 04:37:05.520623 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:37:05.520631 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:37:05.520642 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:37:05.577158 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:37:05.577177 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:37:05.593604 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:37:05.593620 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:37:05.661751 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:37:05.653767   16667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:37:05.654429   16667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:37:05.656081   16667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:37:05.656695   16667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:37:05.658088   16667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:37:05.661761 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:37:05.661771 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:37:05.729846 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:37:05.729866 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:37:08.257598 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:37:08.267457 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:37:08.267520 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:37:08.295093 1193189 cri.go:89] found id: ""
	I1209 04:37:08.295107 1193189 logs.go:282] 0 containers: []
	W1209 04:37:08.295114 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:37:08.295119 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:37:08.295181 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:37:08.320140 1193189 cri.go:89] found id: ""
	I1209 04:37:08.320153 1193189 logs.go:282] 0 containers: []
	W1209 04:37:08.320160 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:37:08.320165 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:37:08.320233 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:37:08.344055 1193189 cri.go:89] found id: ""
	I1209 04:37:08.344069 1193189 logs.go:282] 0 containers: []
	W1209 04:37:08.344075 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:37:08.344081 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:37:08.344141 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:37:08.372791 1193189 cri.go:89] found id: ""
	I1209 04:37:08.372805 1193189 logs.go:282] 0 containers: []
	W1209 04:37:08.372811 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:37:08.372816 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:37:08.372874 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:37:08.396162 1193189 cri.go:89] found id: ""
	I1209 04:37:08.396175 1193189 logs.go:282] 0 containers: []
	W1209 04:37:08.396182 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:37:08.396187 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:37:08.396245 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:37:08.420733 1193189 cri.go:89] found id: ""
	I1209 04:37:08.420747 1193189 logs.go:282] 0 containers: []
	W1209 04:37:08.420755 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:37:08.420769 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:37:08.420830 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:37:08.444879 1193189 cri.go:89] found id: ""
	I1209 04:37:08.444894 1193189 logs.go:282] 0 containers: []
	W1209 04:37:08.444900 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:37:08.444918 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:37:08.444929 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:37:08.508132 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:37:08.499420   16764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:37:08.499882   16764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:37:08.501619   16764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:37:08.502150   16764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:37:08.503673   16764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:37:08.508143 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:37:08.508156 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:37:08.570875 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:37:08.570900 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:37:08.602018 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:37:08.602034 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:37:08.663156 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:37:08.663174 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
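Between diagnostic passes the harness polls for a running apiserver process roughly every three seconds; the pgrep flags (-x exact match, -n newest, -f full command line) are straight from the log. The wait amounts to something like the following, with the deadline enforcement omitted:

    # poll until an apiserver process appears (the harness also enforces a deadline)
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*'; do sleep 3; done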
	I1209 04:37:11.180415 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:37:11.191088 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:37:11.191148 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:37:11.218679 1193189 cri.go:89] found id: ""
	I1209 04:37:11.218696 1193189 logs.go:282] 0 containers: []
	W1209 04:37:11.218703 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:37:11.218708 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:37:11.218766 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:37:11.253810 1193189 cri.go:89] found id: ""
	I1209 04:37:11.253842 1193189 logs.go:282] 0 containers: []
	W1209 04:37:11.253849 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:37:11.253855 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:37:11.253925 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:37:11.279585 1193189 cri.go:89] found id: ""
	I1209 04:37:11.279599 1193189 logs.go:282] 0 containers: []
	W1209 04:37:11.279605 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:37:11.279610 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:37:11.279668 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:37:11.303733 1193189 cri.go:89] found id: ""
	I1209 04:37:11.303747 1193189 logs.go:282] 0 containers: []
	W1209 04:37:11.303754 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:37:11.303759 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:37:11.303818 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:37:11.328678 1193189 cri.go:89] found id: ""
	I1209 04:37:11.328692 1193189 logs.go:282] 0 containers: []
	W1209 04:37:11.328699 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:37:11.328710 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:37:11.328768 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:37:11.352807 1193189 cri.go:89] found id: ""
	I1209 04:37:11.352830 1193189 logs.go:282] 0 containers: []
	W1209 04:37:11.352838 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:37:11.352843 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:37:11.352904 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:37:11.380926 1193189 cri.go:89] found id: ""
	I1209 04:37:11.380940 1193189 logs.go:282] 0 containers: []
	W1209 04:37:11.380946 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:37:11.380954 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:37:11.380964 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:37:11.443730 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:37:11.443751 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:37:11.471147 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:37:11.471163 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:37:11.528045 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:37:11.528068 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:37:11.545822 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:37:11.545839 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:37:11.612652 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:37:11.604231   16889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:37:11.604891   16889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:37:11.606570   16889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:37:11.607169   16889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:37:11.608878   16889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:37:14.112937 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:37:14.123734 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:37:14.123791 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:37:14.149868 1193189 cri.go:89] found id: ""
	I1209 04:37:14.149884 1193189 logs.go:282] 0 containers: []
	W1209 04:37:14.149891 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:37:14.149897 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:37:14.149957 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:37:14.175575 1193189 cri.go:89] found id: ""
	I1209 04:37:14.175589 1193189 logs.go:282] 0 containers: []
	W1209 04:37:14.175595 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:37:14.175601 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:37:14.175665 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:37:14.202589 1193189 cri.go:89] found id: ""
	I1209 04:37:14.202615 1193189 logs.go:282] 0 containers: []
	W1209 04:37:14.202621 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:37:14.202627 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:37:14.202707 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:37:14.229085 1193189 cri.go:89] found id: ""
	I1209 04:37:14.229099 1193189 logs.go:282] 0 containers: []
	W1209 04:37:14.229109 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:37:14.229117 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:37:14.229183 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:37:14.254508 1193189 cri.go:89] found id: ""
	I1209 04:37:14.254522 1193189 logs.go:282] 0 containers: []
	W1209 04:37:14.254529 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:37:14.254534 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:37:14.254626 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:37:14.282967 1193189 cri.go:89] found id: ""
	I1209 04:37:14.282990 1193189 logs.go:282] 0 containers: []
	W1209 04:37:14.282997 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:37:14.283003 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:37:14.283072 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:37:14.307959 1193189 cri.go:89] found id: ""
	I1209 04:37:14.307973 1193189 logs.go:282] 0 containers: []
	W1209 04:37:14.307980 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:37:14.307988 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:37:14.307998 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:37:14.337297 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:37:14.337312 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:37:14.393504 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:37:14.393523 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:37:14.411720 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:37:14.411736 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:37:14.476754 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:37:14.469112   16989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:37:14.469506   16989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:37:14.470955   16989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:37:14.471259   16989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:37:14.472758   16989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:37:14.476764 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:37:14.476775 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:37:17.039773 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:37:17.050019 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:37:17.050078 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:37:17.074811 1193189 cri.go:89] found id: ""
	I1209 04:37:17.074825 1193189 logs.go:282] 0 containers: []
	W1209 04:37:17.074841 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:37:17.074847 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:37:17.074928 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:37:17.098749 1193189 cri.go:89] found id: ""
	I1209 04:37:17.098763 1193189 logs.go:282] 0 containers: []
	W1209 04:37:17.098779 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:37:17.098784 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:37:17.098851 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:37:17.123314 1193189 cri.go:89] found id: ""
	I1209 04:37:17.123328 1193189 logs.go:282] 0 containers: []
	W1209 04:37:17.123334 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:37:17.123348 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:37:17.123404 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:37:17.148281 1193189 cri.go:89] found id: ""
	I1209 04:37:17.148304 1193189 logs.go:282] 0 containers: []
	W1209 04:37:17.148314 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:37:17.148319 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:37:17.148386 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:37:17.178459 1193189 cri.go:89] found id: ""
	I1209 04:37:17.178473 1193189 logs.go:282] 0 containers: []
	W1209 04:37:17.178480 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:37:17.178487 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:37:17.178545 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:37:17.214370 1193189 cri.go:89] found id: ""
	I1209 04:37:17.214383 1193189 logs.go:282] 0 containers: []
	W1209 04:37:17.214390 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:37:17.214395 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:37:17.214455 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:37:17.241547 1193189 cri.go:89] found id: ""
	I1209 04:37:17.241560 1193189 logs.go:282] 0 containers: []
	W1209 04:37:17.241567 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:37:17.241574 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:37:17.241584 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:37:17.300902 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:37:17.300920 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:37:17.318244 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:37:17.318260 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:37:17.379838 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:37:17.371574   17083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:37:17.372258   17083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:37:17.373943   17083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:37:17.374513   17083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:37:17.376103   17083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:37:17.379865 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:37:17.379875 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:37:17.442204 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:37:17.442227 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:37:19.972933 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:37:19.982835 1193189 kubeadm.go:602] duration metric: took 4m3.833613801s to restartPrimaryControlPlane
	W1209 04:37:19.982896 1193189 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1209 04:37:19.982967 1193189 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
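After 4m03s of failed polling, minikube abandons restartPrimaryControlPlane and falls back to a full kubeadm reset followed by re-init. A few sanity checks one could run after such a reset, sketched as assumptions rather than steps the harness performs:

    sudo systemctl is-active kubelet    # expect "inactive" once reset has run
    sudo crictl ps -a                   # expect no leftover kubernetes containers
    ls /etc/kubernetes                  # manifests and *.conf should be gone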
	I1209 04:37:20.394224 1193189 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 04:37:20.407222 1193189 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 04:37:20.415043 1193189 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1209 04:37:20.415096 1193189 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 04:37:20.422447 1193189 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 04:37:20.422458 1193189 kubeadm.go:158] found existing configuration files:
	
	I1209 04:37:20.422511 1193189 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1209 04:37:20.429958 1193189 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 04:37:20.430020 1193189 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 04:37:20.437087 1193189 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1209 04:37:20.444177 1193189 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 04:37:20.444229 1193189 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 04:37:20.451583 1193189 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1209 04:37:20.459107 1193189 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 04:37:20.459158 1193189 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 04:37:20.466013 1193189 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1209 04:37:20.473265 1193189 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 04:37:20.473320 1193189 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
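	The four grep/rm pairs above implement a simple stale-config sweep: any kubeconfig under /etc/kubernetes that does not reference the expected control-plane endpoint is deleted so kubeadm can regenerate it. A minimal shell sketch of the same idea (endpoint and file list taken from the log; this is not minikube's actual implementation):

	    endpoint="https://control-plane.minikube.internal:8441"
	    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	        # grep exits non-zero when the endpoint is absent (or the file is missing),
	        # in which case the file is removed so kubeadm can rewrite it
	        sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
	    done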
	I1209 04:37:20.480362 1193189 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1209 04:37:20.591599 1193189 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1209 04:37:20.592032 1193189 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1209 04:37:20.651935 1193189 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
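	The cgroups v1 warning above points at a kubelet configuration option named 'FailCgroupV1'. As a rough illustration only, the opt-in it describes would touch the kubelet config file written later in this log; the key name below is an assumption (the lowerCamelCase form of the option named in the warning) and should be checked against the kubelet.config.k8s.io/v1beta1 schema:

	    # Sketch only: assumed key name, per the 'FailCgroupV1' option cited in the
	    # warning above. A real change would merge this into the existing config
	    # rather than blindly appending.
	    sudo tee -a /var/lib/kubelet/config.yaml <<'EOF'
	    failCgroupV1: false
	    EOF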
	I1209 04:41:22.764150 1193189 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1209 04:41:22.764175 1193189 kubeadm.go:319] 
	I1209 04:41:22.764241 1193189 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1209 04:41:22.768309 1193189 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1209 04:41:22.768359 1193189 kubeadm.go:319] [preflight] Running pre-flight checks
	I1209 04:41:22.768442 1193189 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1209 04:41:22.768497 1193189 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1209 04:41:22.768531 1193189 kubeadm.go:319] OS: Linux
	I1209 04:41:22.768594 1193189 kubeadm.go:319] CGROUPS_CPU: enabled
	I1209 04:41:22.768653 1193189 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1209 04:41:22.768699 1193189 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1209 04:41:22.768746 1193189 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1209 04:41:22.768792 1193189 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1209 04:41:22.768840 1193189 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1209 04:41:22.768883 1193189 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1209 04:41:22.768930 1193189 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1209 04:41:22.768975 1193189 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1209 04:41:22.769046 1193189 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1209 04:41:22.769140 1193189 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1209 04:41:22.769229 1193189 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1209 04:41:22.769290 1193189 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1209 04:41:22.772269 1193189 out.go:252]   - Generating certificates and keys ...
	I1209 04:41:22.772365 1193189 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1209 04:41:22.772442 1193189 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1209 04:41:22.772517 1193189 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1209 04:41:22.772582 1193189 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1209 04:41:22.772651 1193189 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1209 04:41:22.772740 1193189 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1209 04:41:22.772808 1193189 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1209 04:41:22.772883 1193189 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1209 04:41:22.772975 1193189 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1209 04:41:22.773069 1193189 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1209 04:41:22.773105 1193189 kubeadm.go:319] [certs] Using the existing "sa" key
	I1209 04:41:22.773160 1193189 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1209 04:41:22.773215 1193189 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1209 04:41:22.773279 1193189 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1209 04:41:22.773333 1193189 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1209 04:41:22.773401 1193189 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1209 04:41:22.773459 1193189 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1209 04:41:22.773544 1193189 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1209 04:41:22.773604 1193189 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1209 04:41:22.778452 1193189 out.go:252]   - Booting up control plane ...
	I1209 04:41:22.778558 1193189 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1209 04:41:22.778636 1193189 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1209 04:41:22.778708 1193189 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1209 04:41:22.778830 1193189 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1209 04:41:22.778931 1193189 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1209 04:41:22.779034 1193189 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1209 04:41:22.779165 1193189 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1209 04:41:22.779213 1193189 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1209 04:41:22.779347 1193189 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1209 04:41:22.779447 1193189 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1209 04:41:22.779507 1193189 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001187798s
	I1209 04:41:22.779509 1193189 kubeadm.go:319] 
	I1209 04:41:22.779562 1193189 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1209 04:41:22.779605 1193189 kubeadm.go:319] 	- The kubelet is not running
	I1209 04:41:22.779728 1193189 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1209 04:41:22.779731 1193189 kubeadm.go:319] 
	I1209 04:41:22.779842 1193189 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1209 04:41:22.779891 1193189 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1209 04:41:22.779919 1193189 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1209 04:41:22.779932 1193189 kubeadm.go:319] 
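	When the wait-control-plane phase times out like this, the endpoint and unit it names can be probed by hand. These are the same commands quoted in the error output, nothing minikube-specific:

	    # the health endpoint kubeadm polls for up to 4m0s
	    curl -sSL http://127.0.0.1:10248/healthz
	    # why the kubelet unit keeps failing
	    systemctl status kubelet
	    journalctl -xeu kubelet --no-pager | tail -n 50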
	W1209 04:41:22.780053 1193189 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001187798s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1209 04:41:22.780164 1193189 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1209 04:41:23.192047 1193189 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 04:41:23.205020 1193189 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1209 04:41:23.205076 1193189 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 04:41:23.212555 1193189 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 04:41:23.212563 1193189 kubeadm.go:158] found existing configuration files:
	
	I1209 04:41:23.212616 1193189 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1209 04:41:23.220135 1193189 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 04:41:23.220190 1193189 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 04:41:23.227342 1193189 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1209 04:41:23.234934 1193189 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 04:41:23.234988 1193189 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 04:41:23.242413 1193189 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1209 04:41:23.249859 1193189 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 04:41:23.249916 1193189 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 04:41:23.257497 1193189 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1209 04:41:23.264938 1193189 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 04:41:23.264993 1193189 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1209 04:41:23.272287 1193189 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1209 04:41:23.315971 1193189 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1209 04:41:23.316329 1193189 kubeadm.go:319] [preflight] Running pre-flight checks
	I1209 04:41:23.386479 1193189 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1209 04:41:23.386543 1193189 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1209 04:41:23.386577 1193189 kubeadm.go:319] OS: Linux
	I1209 04:41:23.386622 1193189 kubeadm.go:319] CGROUPS_CPU: enabled
	I1209 04:41:23.386669 1193189 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1209 04:41:23.386716 1193189 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1209 04:41:23.386763 1193189 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1209 04:41:23.386810 1193189 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1209 04:41:23.386857 1193189 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1209 04:41:23.386901 1193189 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1209 04:41:23.386948 1193189 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1209 04:41:23.386993 1193189 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1209 04:41:23.459528 1193189 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1209 04:41:23.459630 1193189 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1209 04:41:23.459719 1193189 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1209 04:41:23.465017 1193189 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1209 04:41:23.470401 1193189 out.go:252]   - Generating certificates and keys ...
	I1209 04:41:23.470490 1193189 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1209 04:41:23.470556 1193189 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1209 04:41:23.470655 1193189 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1209 04:41:23.470730 1193189 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1209 04:41:23.470799 1193189 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1209 04:41:23.470852 1193189 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1209 04:41:23.470919 1193189 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1209 04:41:23.470980 1193189 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1209 04:41:23.471052 1193189 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1209 04:41:23.471123 1193189 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1209 04:41:23.471160 1193189 kubeadm.go:319] [certs] Using the existing "sa" key
	I1209 04:41:23.471222 1193189 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1209 04:41:23.897547 1193189 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1209 04:41:24.071180 1193189 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1209 04:41:24.419266 1193189 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1209 04:41:24.580042 1193189 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1209 04:41:25.012112 1193189 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1209 04:41:25.012658 1193189 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1209 04:41:25.015310 1193189 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1209 04:41:25.018776 1193189 out.go:252]   - Booting up control plane ...
	I1209 04:41:25.018875 1193189 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1209 04:41:25.018952 1193189 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1209 04:41:25.019019 1193189 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1209 04:41:25.039820 1193189 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1209 04:41:25.039928 1193189 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1209 04:41:25.047252 1193189 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1209 04:41:25.047955 1193189 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1209 04:41:25.048349 1193189 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1209 04:41:25.184171 1193189 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1209 04:41:25.184286 1193189 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1209 04:45:25.184394 1193189 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000314916s
	I1209 04:45:25.184418 1193189 kubeadm.go:319] 
	I1209 04:45:25.184509 1193189 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1209 04:45:25.184553 1193189 kubeadm.go:319] 	- The kubelet is not running
	I1209 04:45:25.184657 1193189 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1209 04:45:25.184661 1193189 kubeadm.go:319] 
	I1209 04:45:25.184765 1193189 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1209 04:45:25.184796 1193189 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1209 04:45:25.184826 1193189 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1209 04:45:25.184829 1193189 kubeadm.go:319] 
	I1209 04:45:25.188658 1193189 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1209 04:45:25.189080 1193189 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1209 04:45:25.189188 1193189 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1209 04:45:25.189440 1193189 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1209 04:45:25.189444 1193189 kubeadm.go:319] 
	I1209 04:45:25.189512 1193189 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1209 04:45:25.189563 1193189 kubeadm.go:403] duration metric: took 12m9.073031305s to StartCluster
	I1209 04:45:25.189594 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:45:25.189654 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:45:25.214653 1193189 cri.go:89] found id: ""
	I1209 04:45:25.214667 1193189 logs.go:282] 0 containers: []
	W1209 04:45:25.214674 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:45:25.214680 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:45:25.214745 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:45:25.239781 1193189 cri.go:89] found id: ""
	I1209 04:45:25.239795 1193189 logs.go:282] 0 containers: []
	W1209 04:45:25.239802 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:45:25.239806 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:45:25.239865 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:45:25.263923 1193189 cri.go:89] found id: ""
	I1209 04:45:25.263937 1193189 logs.go:282] 0 containers: []
	W1209 04:45:25.263943 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:45:25.263949 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:45:25.264009 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:45:25.289497 1193189 cri.go:89] found id: ""
	I1209 04:45:25.289510 1193189 logs.go:282] 0 containers: []
	W1209 04:45:25.289521 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:45:25.289527 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:45:25.289587 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:45:25.314477 1193189 cri.go:89] found id: ""
	I1209 04:45:25.314491 1193189 logs.go:282] 0 containers: []
	W1209 04:45:25.314497 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:45:25.314502 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:45:25.314564 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:45:25.343027 1193189 cri.go:89] found id: ""
	I1209 04:45:25.343041 1193189 logs.go:282] 0 containers: []
	W1209 04:45:25.343048 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:45:25.343054 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:45:25.343116 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:45:25.372137 1193189 cri.go:89] found id: ""
	I1209 04:45:25.372151 1193189 logs.go:282] 0 containers: []
	W1209 04:45:25.372158 1193189 logs.go:284] No container was found matching "kindnet"
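	The post-mortem above walks every expected control-plane component and finds no containers for any of them (each query comes back with found id: ""), confirming the kubelet never launched the static pods. The same scan can be reproduced with the crictl invocation from the log:

	    # each of these queries came back empty in this run
	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	        echo "== $name =="
	        sudo crictl ps -a --quiet --name="$name"
	    done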
	I1209 04:45:25.372166 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:45:25.372175 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:45:25.430985 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:45:25.431004 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:45:25.448709 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:45:25.448726 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:45:25.515693 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:45:25.506884   20840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:25.507687   20840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:25.509338   20840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:25.509652   20840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:25.511142   20840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:45:25.515704 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:45:25.515716 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:45:25.578666 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:45:25.578686 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1209 04:45:25.609638 1193189 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000314916s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1209 04:45:25.609683 1193189 out.go:285] * 
	W1209 04:45:25.609756 1193189 out.go:285] * 
	W1209 04:45:25.611848 1193189 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 04:45:25.617063 1193189 out.go:203] 
	W1209 04:45:25.620790 1193189 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000314916s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1209 04:45:25.620840 1193189 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1209 04:45:25.620858 1193189 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1209 04:45:25.624102 1193189 out.go:203] 
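	The suggestion above amounts to retrying the start with an extra kubelet flag. Against this run's profile (functional-667319, docker driver, containerd runtime, per the surrounding log), the retry would look roughly like the command below; note that the kubelet log further down points at the cgroup v1 host as the real blocker, which this flag alone may not address:

	    minikube start -p functional-667319 --driver=docker --container-runtime=containerd \
	        --kubernetes-version=v1.35.0-beta.0 \
	        --extra-config=kubelet.cgroup-driver=systemd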
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 09 04:45:34 functional-667319 containerd[9667]: time="2025-12-09T04:45:34.304901430Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-667319\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 09 04:45:35 functional-667319 containerd[9667]: time="2025-12-09T04:45:35.122240132Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-667319\""
	Dec 09 04:45:35 functional-667319 containerd[9667]: time="2025-12-09T04:45:35.124969737Z" level=info msg="ImageDelete event name:\"docker.io/kicbase/echo-server:functional-667319\""
	Dec 09 04:45:35 functional-667319 containerd[9667]: time="2025-12-09T04:45:35.127379711Z" level=info msg="ImageDelete event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\""
	Dec 09 04:45:35 functional-667319 containerd[9667]: time="2025-12-09T04:45:35.136337700Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-667319\" returns successfully"
	Dec 09 04:45:35 functional-667319 containerd[9667]: time="2025-12-09T04:45:35.369054172Z" level=info msg="No images store for sha256:dd3309dec5df27eec01ab59220514c77e78d9b5409234aefaeee1c6a1c609658"
	Dec 09 04:45:35 functional-667319 containerd[9667]: time="2025-12-09T04:45:35.371319041Z" level=info msg="ImageCreate event name:\"docker.io/kicbase/echo-server:functional-667319\""
	Dec 09 04:45:35 functional-667319 containerd[9667]: time="2025-12-09T04:45:35.378181758Z" level=info msg="ImageCreate event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 09 04:45:35 functional-667319 containerd[9667]: time="2025-12-09T04:45:35.378777478Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-667319\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 09 04:45:36 functional-667319 containerd[9667]: time="2025-12-09T04:45:36.438263871Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-667319\""
	Dec 09 04:45:36 functional-667319 containerd[9667]: time="2025-12-09T04:45:36.441243432Z" level=info msg="ImageDelete event name:\"docker.io/kicbase/echo-server:functional-667319\""
	Dec 09 04:45:36 functional-667319 containerd[9667]: time="2025-12-09T04:45:36.443329590Z" level=info msg="ImageDelete event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\""
	Dec 09 04:45:36 functional-667319 containerd[9667]: time="2025-12-09T04:45:36.451953736Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-667319\" returns successfully"
	Dec 09 04:45:36 functional-667319 containerd[9667]: time="2025-12-09T04:45:36.699722091Z" level=info msg="No images store for sha256:dd3309dec5df27eec01ab59220514c77e78d9b5409234aefaeee1c6a1c609658"
	Dec 09 04:45:36 functional-667319 containerd[9667]: time="2025-12-09T04:45:36.702120561Z" level=info msg="ImageCreate event name:\"docker.io/kicbase/echo-server:functional-667319\""
	Dec 09 04:45:36 functional-667319 containerd[9667]: time="2025-12-09T04:45:36.709091689Z" level=info msg="ImageCreate event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 09 04:45:36 functional-667319 containerd[9667]: time="2025-12-09T04:45:36.709423744Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-667319\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 09 04:45:37 functional-667319 containerd[9667]: time="2025-12-09T04:45:37.473128393Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-667319\""
	Dec 09 04:45:37 functional-667319 containerd[9667]: time="2025-12-09T04:45:37.475631173Z" level=info msg="ImageDelete event name:\"docker.io/kicbase/echo-server:functional-667319\""
	Dec 09 04:45:37 functional-667319 containerd[9667]: time="2025-12-09T04:45:37.477592221Z" level=info msg="ImageDelete event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\""
	Dec 09 04:45:37 functional-667319 containerd[9667]: time="2025-12-09T04:45:37.490276659Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-667319\" returns successfully"
	Dec 09 04:45:38 functional-667319 containerd[9667]: time="2025-12-09T04:45:38.148598616Z" level=info msg="No images store for sha256:904ceb29077e75bbca4483a04b0d4e97cdb7c2e3a6b6f3f1bb70ace08229b0b3"
	Dec 09 04:45:38 functional-667319 containerd[9667]: time="2025-12-09T04:45:38.150763877Z" level=info msg="ImageCreate event name:\"docker.io/kicbase/echo-server:functional-667319\""
	Dec 09 04:45:38 functional-667319 containerd[9667]: time="2025-12-09T04:45:38.160850013Z" level=info msg="ImageCreate event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 09 04:45:38 functional-667319 containerd[9667]: time="2025-12-09T04:45:38.161468872Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-667319\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:47:25.846959   23136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:47:25.847713   23136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:47:25.849333   23136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:47:25.849836   23136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:47:25.851498   23136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 9 03:13] overlayfs: idmapped layers are currently not supported
	[ +25.904254] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:14] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:16] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:18] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:19] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:30] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:32] overlayfs: idmapped layers are currently not supported
	[ +28.114653] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:33] overlayfs: idmapped layers are currently not supported
	[ +23.720849] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:34] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:35] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:36] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:37] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:38] overlayfs: idmapped layers are currently not supported
	[ +23.656275] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:39] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:57] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:58] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:00] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:02] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:03] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:05] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:06] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 04:47:25 up  7:29,  0 user,  load average: 0.12, 0.21, 0.44
	Linux functional-667319 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 09 04:47:22 functional-667319 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 09 04:47:23 functional-667319 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 477.
	Dec 09 04:47:23 functional-667319 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:47:23 functional-667319 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:47:23 functional-667319 kubelet[23021]: E1209 04:47:23.230660   23021 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 09 04:47:23 functional-667319 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 09 04:47:23 functional-667319 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 09 04:47:23 functional-667319 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 478.
	Dec 09 04:47:23 functional-667319 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:47:23 functional-667319 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:47:23 functional-667319 kubelet[23026]: E1209 04:47:23.975425   23026 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 09 04:47:23 functional-667319 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 09 04:47:23 functional-667319 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 09 04:47:24 functional-667319 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 479.
	Dec 09 04:47:24 functional-667319 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:47:24 functional-667319 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:47:24 functional-667319 kubelet[23032]: E1209 04:47:24.738798   23032 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 09 04:47:24 functional-667319 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 09 04:47:24 functional-667319 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 09 04:47:25 functional-667319 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 480.
	Dec 09 04:47:25 functional-667319 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:47:25 functional-667319 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:47:25 functional-667319 kubelet[23052]: E1209 04:47:25.489317   23052 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 09 04:47:25 functional-667319 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 09 04:47:25 functional-667319 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
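The kubelet restart loop at the end of the captured log is failing configuration validation because the host runs cgroup v1, which this kubelet build (v1.35.0-beta.0) refuses to start on. A minimal Go sketch of the equivalent host-side check, assuming golang.org/x/sys/unix is available — an illustration, not code from the suite:

	package main

	import (
		"fmt"

		"golang.org/x/sys/unix"
	)

	func main() {
		var st unix.Statfs_t
		// Statfs on the cgroup mount distinguishes v1 (tmpfs) from v2 (cgroup2fs).
		if err := unix.Statfs("/sys/fs/cgroup", &st); err != nil {
			fmt.Println("statfs failed:", err)
			return
		}
		if st.Type == unix.CGROUP2_SUPER_MAGIC {
			fmt.Println("cgroup v2 (unified hierarchy): kubelet validation would pass")
		} else {
			fmt.Println("cgroup v1: matches the validation failure in the log")
		}
	}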
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-667319 -n functional-667319
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-667319 -n functional-667319: exit status 2 (351.617219ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-667319" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (2.34s)
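The `status` probe above and the earlier kubectl errors reduce to the same symptom: nothing is listening on the apiserver port. A hypothetical stand-alone probe in Go that reproduces the "connection refused" result — the address and port come from the log, but the probe itself is an assumption, not part of minikube:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// 192.168.49.2:8441 is the apiserver endpoint the tests poll.
		conn, err := net.DialTimeout("tcp", "192.168.49.2:8441", 2*time.Second)
		if err != nil {
			fmt.Println("apiserver unreachable:", err) // expect: connection refused
			return
		}
		defer conn.Close()
		fmt.Println("apiserver port is open")
	}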

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (241.64s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
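The poll behind the warnings below presumably lists pods by label selector via client-go; a sketch under that assumption (the package paths are the standard client-go ones, but this reconstruction is hypothetical, not helpers_test.go itself):

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Load the default kubeconfig (~/.kube/config) and build a clientset.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// The same selector the test waits on; with the apiserver down this
		// returns the "connection refused" errors seen in the warnings.
		pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(),
			metav1.ListOptions{LabelSelector: "integration-test=storage-provisioner"})
		if err != nil {
			fmt.Println("pod list failed:", err)
			return
		}
		fmt.Println("matching pods:", len(pods.Items))
	}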
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[identical warning repeated 2 more times]
I1209 04:45:54.562900 1144231 retry.go:31] will retry after 3.841151273s: Temporary Error: Get "http://10.98.178.96": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[identical warning repeated 12 more times]
E1209 04:46:06.732382 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/addons-221952/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
I1209 04:46:08.404873 1144231 retry.go:31] will retry after 5.755704353s: Temporary Error: Get "http://10.98.178.96": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[identical warning repeated 15 more times]
I1209 04:46:24.161003 1144231 retry.go:31] will retry after 5.285482724s: Temporary Error: Get "http://10.98.178.96": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[identical warning repeated 14 more times]
I1209 04:46:39.447661 1144231 retry.go:31] will retry after 9.872874903s: Temporary Error: Get "http://10.98.178.96": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[identical warning repeated 19 more times]
I1209 04:46:59.322182 1144231 retry.go:31] will retry after 14.792134092s: Temporary Error: Get "http://10.98.178.96": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
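retry.go:31 is logging a growing, jittered delay between attempts (3.8s, 5.7s, 5.3s, 9.9s, 14.8s so far). A minimal sketch of capped exponential backoff with jitter that produces this shape, assuming minikube's helper behaves roughly like this — not its actual implementation:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retryWithBackoff retries op with capped exponential backoff plus jitter,
	// printing the delay in the same style as retry.go's log lines.
	func retryWithBackoff(attempts int, base, maxDelay time.Duration, op func() error) error {
		delay := base
		for i := 0; i < attempts; i++ {
			if err := op(); err == nil {
				return nil
			}
			jittered := delay + time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("will retry after %v\n", jittered)
			time.Sleep(jittered)
			if delay *= 2; delay > maxDelay {
				delay = maxDelay
			}
		}
		return errors.New("all attempts failed")
	}

	func main() {
		_ = retryWithBackoff(5, 3*time.Second, 15*time.Second, func() error {
			return errors.New("Temporary Error: context deadline exceeded")
		})
	}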
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[identical warning repeated 39 more times]
E1209 04:47:38.985929 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-717497/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[identical warning repeated 35 more times]
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
E1209 04:49:09.814637 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/addons-221952/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
	[the identical warning above was emitted 41 more times before the poll timed out]
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test_pvc_test.go:50: ***** TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: pod "integration-test=storage-provisioner" failed to start within 4m0s: context deadline exceeded ****
functional_test_pvc_test.go:50: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-667319 -n functional-667319
functional_test_pvc_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-667319 -n functional-667319: exit status 2 (314.715341ms)

-- stdout --
	Stopped
-- /stdout --
functional_test_pvc_test.go:50: status error: exit status 2 (may be ok)
functional_test_pvc_test.go:50: "functional-667319" apiserver is not running, skipping kubectl commands (state="Stopped")
functional_test_pvc_test.go:51: failed waiting for storage-provisioner: integration-test=storage-provisioner within 4m0s: context deadline exceeded
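Every poll above failed at the TCP layer (connect: connection refused against 192.168.49.2:8441), which points at the apiserver being down outright rather than slow to answer, so the pod never had a chance to be observed. A minimal manual reproduction of the same check, assuming the functional-667319 kubeconfig context and the minikube binary built for this run (hypothetical invocations, not part of the recorded test output):

	kubectl get pods -n kube-system -l integration-test=storage-provisioner
	out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-667319

The second command is the same probe the harness ran above; on a healthy cluster it prints "Running" instead of "Stopped".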
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-667319
helpers_test.go:243: (dbg) docker inspect functional-667319:

-- stdout --
	[
	    {
	        "Id": "e5b6511799c8d5c445a335a3bd5cc9a61b518fc27ac93dad8800da366ef32129",
	        "Created": "2025-12-09T04:18:34.060957311Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1182075,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-09T04:18:34.126944158Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e4eb91ed18a24161fce60c7cdd660144ecd5b8c5029dc2dea2c5e423c2f48ce4",
	        "ResolvConfPath": "/var/lib/docker/containers/e5b6511799c8d5c445a335a3bd5cc9a61b518fc27ac93dad8800da366ef32129/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e5b6511799c8d5c445a335a3bd5cc9a61b518fc27ac93dad8800da366ef32129/hostname",
	        "HostsPath": "/var/lib/docker/containers/e5b6511799c8d5c445a335a3bd5cc9a61b518fc27ac93dad8800da366ef32129/hosts",
	        "LogPath": "/var/lib/docker/containers/e5b6511799c8d5c445a335a3bd5cc9a61b518fc27ac93dad8800da366ef32129/e5b6511799c8d5c445a335a3bd5cc9a61b518fc27ac93dad8800da366ef32129-json.log",
	        "Name": "/functional-667319",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-667319:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-667319",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e5b6511799c8d5c445a335a3bd5cc9a61b518fc27ac93dad8800da366ef32129",
	                "LowerDir": "/var/lib/docker/overlay2/b0239006282b6e4609a1f554d0a3fb94c749a13505795c8e4078cb2db194e8e0-init/diff:/var/lib/docker/overlay2/c44bb57aa59cc265266f37f2bb6e7ec0e7d641c3b4aeaa57e6d23deec6f0d1d4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b0239006282b6e4609a1f554d0a3fb94c749a13505795c8e4078cb2db194e8e0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b0239006282b6e4609a1f554d0a3fb94c749a13505795c8e4078cb2db194e8e0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b0239006282b6e4609a1f554d0a3fb94c749a13505795c8e4078cb2db194e8e0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-667319",
	                "Source": "/var/lib/docker/volumes/functional-667319/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-667319",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-667319",
	                "name.minikube.sigs.k8s.io": "functional-667319",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7c81dabcd9e57af9bce0bc0f5619f6ef3a27af43f4b649283a5bd778ab256415",
	            "SandboxKey": "/var/run/docker/netns/7c81dabcd9e5",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33900"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33901"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33904"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33902"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33903"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-667319": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "fe:40:bd:46:56:d8",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "88b3a65de70c15005c532a44219284d4df94e474ca5b78b04514c2f932b03beb",
	                    "EndpointID": "bdef7b156f4a28c1f641ae70b42db2750bb810ae6fe93fd65325e62eb232fe91",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-667319",
	                        "e5b6511799c8"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
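The inspect output shows the kic container itself is healthy: State.Status is "running" and the apiserver port 8441/tcp is published to the host at 127.0.0.1:33903. That points at the apiserver process inside the container, not Docker networking, as the source of the refused connections. A quick way to confirm from the host is to probe the published port directly (a sketch; the host port is taken from the NetworkSettings.Ports block above and is specific to this run):

	curl -k https://127.0.0.1:33903/healthz

With the apiserver down this fails with "connection refused" just like the pod-list polls; with it up, the request at least completes a TLS handshake (and typically returns 401 without credentials).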
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-667319 -n functional-667319
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-667319 -n functional-667319: exit status 2 (314.790875ms)

-- stdout --
	Running
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
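Note the split result: the {{.Host}} probe reports "Running" (consistent with the docker inspect state above) while the earlier {{.APIServer}} probe reported "Stopped", i.e. the node container survived but the control plane inside it did not. Both fields can be read in one invocation (a sketch; {{.Host}} and {{.APIServer}} appear in this log, while {{.Kubelet}} is assumed to exist alongside them in minikube's status template):

	out/minikube-linux-arm64 status -p functional-667319 --format='host:{{.Host}} kubelet:{{.Kubelet}} apiserver:{{.APIServer}}'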
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-667319 logs -n 25
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                        ARGS                                                                         │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-667319 ssh findmnt -T /mount1                                                                                                            │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:45 UTC │ 09 Dec 25 04:45 UTC │
	│ ssh            │ functional-667319 ssh findmnt -T /mount2                                                                                                            │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:45 UTC │ 09 Dec 25 04:45 UTC │
	│ ssh            │ functional-667319 ssh findmnt -T /mount3                                                                                                            │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:45 UTC │ 09 Dec 25 04:45 UTC │
	│ mount          │ -p functional-667319 --kill=true                                                                                                                    │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:45 UTC │                     │
	│ addons         │ functional-667319 addons list                                                                                                                       │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:47 UTC │ 09 Dec 25 04:47 UTC │
	│ addons         │ functional-667319 addons list -o json                                                                                                               │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:47 UTC │ 09 Dec 25 04:47 UTC │
	│ service        │ functional-667319 service list                                                                                                                      │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:47 UTC │                     │
	│ service        │ functional-667319 service list -o json                                                                                                              │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:47 UTC │                     │
	│ service        │ functional-667319 service --namespace=default --https --url hello-node                                                                              │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:47 UTC │                     │
	│ service        │ functional-667319 service hello-node --url --format={{.IP}}                                                                                         │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:47 UTC │                     │
	│ service        │ functional-667319 service hello-node --url                                                                                                          │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:47 UTC │                     │
	│ start          │ -p functional-667319 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:47 UTC │                     │
	│ start          │ -p functional-667319 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0           │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:47 UTC │                     │
	│ start          │ -p functional-667319 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:47 UTC │                     │
	│ dashboard      │ --url --port 36195 -p functional-667319 --alsologtostderr -v=1                                                                                      │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:47 UTC │                     │
	│ image          │ functional-667319 image ls --format short --alsologtostderr                                                                                         │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:47 UTC │ 09 Dec 25 04:47 UTC │
	│ image          │ functional-667319 image ls --format yaml --alsologtostderr                                                                                          │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:47 UTC │ 09 Dec 25 04:47 UTC │
	│ ssh            │ functional-667319 ssh pgrep buildkitd                                                                                                               │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:47 UTC │                     │
	│ image          │ functional-667319 image build -t localhost/my-image:functional-667319 testdata/build --alsologtostderr                                              │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:47 UTC │ 09 Dec 25 04:47 UTC │
	│ image          │ functional-667319 image ls                                                                                                                          │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:47 UTC │ 09 Dec 25 04:47 UTC │
	│ image          │ functional-667319 image ls --format json --alsologtostderr                                                                                          │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:47 UTC │ 09 Dec 25 04:47 UTC │
	│ image          │ functional-667319 image ls --format table --alsologtostderr                                                                                         │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:47 UTC │ 09 Dec 25 04:47 UTC │
	│ update-context │ functional-667319 update-context --alsologtostderr -v=2                                                                                             │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:47 UTC │ 09 Dec 25 04:47 UTC │
	│ update-context │ functional-667319 update-context --alsologtostderr -v=2                                                                                             │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:47 UTC │ 09 Dec 25 04:47 UTC │
	│ update-context │ functional-667319 update-context --alsologtostderr -v=2                                                                                             │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:47 UTC │ 09 Dec 25 04:47 UTC │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/09 04:47:34
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 04:47:34.321713 1211929 out.go:360] Setting OutFile to fd 1 ...
	I1209 04:47:34.321929 1211929 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 04:47:34.321959 1211929 out.go:374] Setting ErrFile to fd 2...
	I1209 04:47:34.321979 1211929 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 04:47:34.322391 1211929 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-1142328/.minikube/bin
	I1209 04:47:34.322830 1211929 out.go:368] Setting JSON to false
	I1209 04:47:34.323745 1211929 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":26978,"bootTime":1765228677,"procs":158,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1209 04:47:34.323845 1211929 start.go:143] virtualization:  
	I1209 04:47:34.327171 1211929 out.go:179] * [functional-667319] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1209 04:47:34.331051 1211929 notify.go:221] Checking for updates...
	I1209 04:47:34.331388 1211929 out.go:179]   - MINIKUBE_LOCATION=22081
	I1209 04:47:34.334585 1211929 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 04:47:34.337483 1211929 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22081-1142328/kubeconfig
	I1209 04:47:34.340345 1211929 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-1142328/.minikube
	I1209 04:47:34.343178 1211929 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1209 04:47:34.346040 1211929 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 04:47:34.349423 1211929 config.go:182] Loaded profile config "functional-667319": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1209 04:47:34.349977 1211929 driver.go:422] Setting default libvirt URI to qemu:///system
	I1209 04:47:34.375111 1211929 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1209 04:47:34.375225 1211929 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 04:47:34.453332 1211929 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-09 04:47:34.434527916 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1209 04:47:34.453454 1211929 docker.go:319] overlay module found
	I1209 04:47:34.456344 1211929 out.go:179] * Using the docker driver based on existing profile
	I1209 04:47:34.459148 1211929 start.go:309] selected driver: docker
	I1209 04:47:34.459167 1211929 start.go:927] validating driver "docker" against &{Name:functional-667319 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-667319 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 04:47:34.459255 1211929 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 04:47:34.462988 1211929 out.go:203] 
	W1209 04:47:34.465967 1211929 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1209 04:47:34.469187 1211929 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 09 04:45:35 functional-667319 containerd[9667]: time="2025-12-09T04:45:35.378181758Z" level=info msg="ImageCreate event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 09 04:45:35 functional-667319 containerd[9667]: time="2025-12-09T04:45:35.378777478Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-667319\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 09 04:45:36 functional-667319 containerd[9667]: time="2025-12-09T04:45:36.438263871Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-667319\""
	Dec 09 04:45:36 functional-667319 containerd[9667]: time="2025-12-09T04:45:36.441243432Z" level=info msg="ImageDelete event name:\"docker.io/kicbase/echo-server:functional-667319\""
	Dec 09 04:45:36 functional-667319 containerd[9667]: time="2025-12-09T04:45:36.443329590Z" level=info msg="ImageDelete event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\""
	Dec 09 04:45:36 functional-667319 containerd[9667]: time="2025-12-09T04:45:36.451953736Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-667319\" returns successfully"
	Dec 09 04:45:36 functional-667319 containerd[9667]: time="2025-12-09T04:45:36.699722091Z" level=info msg="No images store for sha256:dd3309dec5df27eec01ab59220514c77e78d9b5409234aefaeee1c6a1c609658"
	Dec 09 04:45:36 functional-667319 containerd[9667]: time="2025-12-09T04:45:36.702120561Z" level=info msg="ImageCreate event name:\"docker.io/kicbase/echo-server:functional-667319\""
	Dec 09 04:45:36 functional-667319 containerd[9667]: time="2025-12-09T04:45:36.709091689Z" level=info msg="ImageCreate event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 09 04:45:36 functional-667319 containerd[9667]: time="2025-12-09T04:45:36.709423744Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-667319\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 09 04:45:37 functional-667319 containerd[9667]: time="2025-12-09T04:45:37.473128393Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-667319\""
	Dec 09 04:45:37 functional-667319 containerd[9667]: time="2025-12-09T04:45:37.475631173Z" level=info msg="ImageDelete event name:\"docker.io/kicbase/echo-server:functional-667319\""
	Dec 09 04:45:37 functional-667319 containerd[9667]: time="2025-12-09T04:45:37.477592221Z" level=info msg="ImageDelete event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\""
	Dec 09 04:45:37 functional-667319 containerd[9667]: time="2025-12-09T04:45:37.490276659Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-667319\" returns successfully"
	Dec 09 04:45:38 functional-667319 containerd[9667]: time="2025-12-09T04:45:38.148598616Z" level=info msg="No images store for sha256:904ceb29077e75bbca4483a04b0d4e97cdb7c2e3a6b6f3f1bb70ace08229b0b3"
	Dec 09 04:45:38 functional-667319 containerd[9667]: time="2025-12-09T04:45:38.150763877Z" level=info msg="ImageCreate event name:\"docker.io/kicbase/echo-server:functional-667319\""
	Dec 09 04:45:38 functional-667319 containerd[9667]: time="2025-12-09T04:45:38.160850013Z" level=info msg="ImageCreate event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 09 04:45:38 functional-667319 containerd[9667]: time="2025-12-09T04:45:38.161468872Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-667319\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 09 04:47:40 functional-667319 containerd[9667]: time="2025-12-09T04:47:40.179087907Z" level=info msg="connecting to shim sfot1esu6t2s5w4rldumofvef" address="unix:///run/containerd/s/2429490321ba2310d1e50d6f8129304e44129622857c15a860e887cc666b9ccc" namespace=k8s.io protocol=ttrpc version=3
	Dec 09 04:47:40 functional-667319 containerd[9667]: time="2025-12-09T04:47:40.257507086Z" level=info msg="shim disconnected" id=sfot1esu6t2s5w4rldumofvef namespace=k8s.io
	Dec 09 04:47:40 functional-667319 containerd[9667]: time="2025-12-09T04:47:40.257550030Z" level=info msg="cleaning up after shim disconnected" id=sfot1esu6t2s5w4rldumofvef namespace=k8s.io
	Dec 09 04:47:40 functional-667319 containerd[9667]: time="2025-12-09T04:47:40.257561690Z" level=info msg="cleaning up dead shim" id=sfot1esu6t2s5w4rldumofvef namespace=k8s.io
	Dec 09 04:47:40 functional-667319 containerd[9667]: time="2025-12-09T04:47:40.511270996Z" level=info msg="ImageCreate event name:\"localhost/my-image:functional-667319\""
	Dec 09 04:47:40 functional-667319 containerd[9667]: time="2025-12-09T04:47:40.516697117Z" level=info msg="ImageCreate event name:\"sha256:8d19d3a32a56b2fa24160dc46919201cf0064b6c0d0fc7f41ab4eafa4fa50f4b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 09 04:47:40 functional-667319 containerd[9667]: time="2025-12-09T04:47:40.517093588Z" level=info msg="ImageUpdate event name:\"localhost/my-image:functional-667319\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:49:52.813505   25148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:49:52.814105   25148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:49:52.815793   25148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:49:52.816458   25148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:49:52.818061   25148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 9 03:13] overlayfs: idmapped layers are currently not supported
	[ +25.904254] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:14] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:16] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:18] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:19] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:30] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:32] overlayfs: idmapped layers are currently not supported
	[ +28.114653] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:33] overlayfs: idmapped layers are currently not supported
	[ +23.720849] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:34] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:35] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:36] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:37] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:38] overlayfs: idmapped layers are currently not supported
	[ +23.656275] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:39] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:57] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:58] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:00] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:02] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:03] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:05] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:06] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 04:49:52 up  7:31,  0 user,  load average: 0.26, 0.30, 0.45
	Linux functional-667319 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 09 04:49:49 functional-667319 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 09 04:49:50 functional-667319 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 673.
	Dec 09 04:49:50 functional-667319 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:49:50 functional-667319 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:49:50 functional-667319 kubelet[25018]: E1209 04:49:50.225712   25018 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 09 04:49:50 functional-667319 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 09 04:49:50 functional-667319 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 09 04:49:50 functional-667319 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 674.
	Dec 09 04:49:50 functional-667319 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:49:50 functional-667319 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:49:50 functional-667319 kubelet[25024]: E1209 04:49:50.978387   25024 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 09 04:49:50 functional-667319 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 09 04:49:50 functional-667319 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 09 04:49:51 functional-667319 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 675.
	Dec 09 04:49:51 functional-667319 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:49:51 functional-667319 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:49:51 functional-667319 kubelet[25029]: E1209 04:49:51.759960   25029 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 09 04:49:51 functional-667319 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 09 04:49:51 functional-667319 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 09 04:49:52 functional-667319 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 676.
	Dec 09 04:49:52 functional-667319 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:49:52 functional-667319 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:49:52 functional-667319 kubelet[25066]: E1209 04:49:52.487811   25066 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 09 04:49:52 functional-667319 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 09 04:49:52 functional-667319 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
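
A note on the failure chain: the kubelet journal above shows kubelet v1.35.0-beta.0 refusing to start because the host uses cgroup v1 ("kubelet is configured to not run on a host using cgroup v1"), with systemd already at restart 676. With no kubelet, the API server on port 8441 never comes up, which is why every kubectl-dependent step in this group fails with "connection refused". A quick way to check a host's cgroup mode (a diagnostic sketch, not part of the test suite; stat is standard coreutils):

    # Prints "cgroup2fs" on a cgroup v2 (unified) host, "tmpfs" on cgroup v1
    stat -fc %T /sys/fs/cgroup/
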
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-667319 -n functional-667319
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-667319 -n functional-667319: exit status 2 (339.786017ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-667319" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (241.64s)
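
The 241s PersistentVolumeClaim timeout follows directly from that crash loop: the node container itself is healthy, but the control plane inside it never becomes ready. The kubelet unit can be checked from the host with the same binary and profile used in this run (a sketch; systemctl output format may vary by systemd version):

    # Shows kubelet unit state and restart count inside the minikube node
    out/minikube-linux-arm64 -p functional-667319 ssh -- sudo systemctl status kubelet --no-pager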

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (2.18s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-667319 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
functional_test.go:234: (dbg) Non-zero exit: kubectl --context functional-667319 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": exit status 1 (64.642547ms)

-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

** /stderr **
functional_test.go:236: failed to 'kubectl get nodes' with args "kubectl --context functional-667319 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": exit status 1
functional_test.go:242: expected to have label "minikube.k8s.io/commit" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/version" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/name" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/primary" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

** /stderr **
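
The template error itself is a secondary symptom: with the API server refusing connections, kubectl prints an empty List ({"apiVersion":"v1","items":[],...}), and index .items 0 then fails on the empty slice. A variant of the test's template guarded with {{if .items}} would report the empty result instead of a template error (a sketch, assuming the same kubectl context):

    kubectl --context functional-667319 get nodes --output=go-template \
      --template='{{if .items}}{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}{{else}}no nodes returned{{end}}'
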
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-667319
helpers_test.go:243: (dbg) docker inspect functional-667319:

-- stdout --
	[
	    {
	        "Id": "e5b6511799c8d5c445a335a3bd5cc9a61b518fc27ac93dad8800da366ef32129",
	        "Created": "2025-12-09T04:18:34.060957311Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1182075,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-09T04:18:34.126944158Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e4eb91ed18a24161fce60c7cdd660144ecd5b8c5029dc2dea2c5e423c2f48ce4",
	        "ResolvConfPath": "/var/lib/docker/containers/e5b6511799c8d5c445a335a3bd5cc9a61b518fc27ac93dad8800da366ef32129/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e5b6511799c8d5c445a335a3bd5cc9a61b518fc27ac93dad8800da366ef32129/hostname",
	        "HostsPath": "/var/lib/docker/containers/e5b6511799c8d5c445a335a3bd5cc9a61b518fc27ac93dad8800da366ef32129/hosts",
	        "LogPath": "/var/lib/docker/containers/e5b6511799c8d5c445a335a3bd5cc9a61b518fc27ac93dad8800da366ef32129/e5b6511799c8d5c445a335a3bd5cc9a61b518fc27ac93dad8800da366ef32129-json.log",
	        "Name": "/functional-667319",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-667319:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-667319",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e5b6511799c8d5c445a335a3bd5cc9a61b518fc27ac93dad8800da366ef32129",
	                "LowerDir": "/var/lib/docker/overlay2/b0239006282b6e4609a1f554d0a3fb94c749a13505795c8e4078cb2db194e8e0-init/diff:/var/lib/docker/overlay2/c44bb57aa59cc265266f37f2bb6e7ec0e7d641c3b4aeaa57e6d23deec6f0d1d4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b0239006282b6e4609a1f554d0a3fb94c749a13505795c8e4078cb2db194e8e0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b0239006282b6e4609a1f554d0a3fb94c749a13505795c8e4078cb2db194e8e0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b0239006282b6e4609a1f554d0a3fb94c749a13505795c8e4078cb2db194e8e0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-667319",
	                "Source": "/var/lib/docker/volumes/functional-667319/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-667319",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-667319",
	                "name.minikube.sigs.k8s.io": "functional-667319",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7c81dabcd9e57af9bce0bc0f5619f6ef3a27af43f4b649283a5bd778ab256415",
	            "SandboxKey": "/var/run/docker/netns/7c81dabcd9e5",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33900"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33901"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33904"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33902"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33903"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-667319": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "fe:40:bd:46:56:d8",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "88b3a65de70c15005c532a44219284d4df94e474ca5b78b04514c2f932b03beb",
	                    "EndpointID": "bdef7b156f4a28c1f641ae70b42db2750bb810ae6fe93fd65325e62eb232fe91",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-667319",
	                        "e5b6511799c8"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
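
The inspect output confirms the split state these tests keep hitting: Docker reports the node container as running since 04:18:34, while minikube reports the API server Stopped, so the failure is entirely inside the guest. To pull just that one field instead of the full JSON dump (a sketch using docker's built-in -f template flag):

    # Prints the container state, e.g. "running"
    docker inspect -f '{{.State.Status}}' functional-667319
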
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-667319 -n functional-667319
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-667319 -n functional-667319: exit status 2 (335.622147ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-667319 logs -n 25
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                              ARGS                                                                               │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ kubectl │ functional-667319 kubectl -- --context functional-667319 get pods                                                                                               │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:33 UTC │                     │
	│ start   │ -p functional-667319 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                                                        │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:33 UTC │                     │
	│ config  │ functional-667319 config unset cpus                                                                                                                             │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:45 UTC │ 09 Dec 25 04:45 UTC │
	│ config  │ functional-667319 config get cpus                                                                                                                               │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:45 UTC │                     │
	│ config  │ functional-667319 config set cpus 2                                                                                                                             │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:45 UTC │ 09 Dec 25 04:45 UTC │
	│ config  │ functional-667319 config get cpus                                                                                                                               │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:45 UTC │ 09 Dec 25 04:45 UTC │
	│ config  │ functional-667319 config unset cpus                                                                                                                             │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:45 UTC │ 09 Dec 25 04:45 UTC │
	│ tunnel  │ functional-667319 tunnel --alsologtostderr                                                                                                                      │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:45 UTC │                     │
	│ tunnel  │ functional-667319 tunnel --alsologtostderr                                                                                                                      │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:45 UTC │                     │
	│ config  │ functional-667319 config get cpus                                                                                                                               │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:45 UTC │                     │
	│ ssh     │ functional-667319 ssh sudo systemctl is-active docker                                                                                                           │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:45 UTC │                     │
	│ tunnel  │ functional-667319 tunnel --alsologtostderr                                                                                                                      │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:45 UTC │                     │
	│ ssh     │ functional-667319 ssh sudo systemctl is-active crio                                                                                                             │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:45 UTC │                     │
	│ image   │ functional-667319 image load --daemon kicbase/echo-server:functional-667319 --alsologtostderr                                                                   │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:45 UTC │ 09 Dec 25 04:45 UTC │
	│ image   │ functional-667319 image ls                                                                                                                                      │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:45 UTC │ 09 Dec 25 04:45 UTC │
	│ image   │ functional-667319 image load --daemon kicbase/echo-server:functional-667319 --alsologtostderr                                                                   │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:45 UTC │ 09 Dec 25 04:45 UTC │
	│ image   │ functional-667319 image ls                                                                                                                                      │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:45 UTC │ 09 Dec 25 04:45 UTC │
	│ image   │ functional-667319 image load --daemon kicbase/echo-server:functional-667319 --alsologtostderr                                                                   │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:45 UTC │ 09 Dec 25 04:45 UTC │
	│ image   │ functional-667319 image ls                                                                                                                                      │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:45 UTC │ 09 Dec 25 04:45 UTC │
	│ image   │ functional-667319 image save kicbase/echo-server:functional-667319 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:45 UTC │ 09 Dec 25 04:45 UTC │
	│ image   │ functional-667319 image rm kicbase/echo-server:functional-667319 --alsologtostderr                                                                              │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:45 UTC │ 09 Dec 25 04:45 UTC │
	│ image   │ functional-667319 image ls                                                                                                                                      │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:45 UTC │ 09 Dec 25 04:45 UTC │
	│ image   │ functional-667319 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr                                       │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:45 UTC │ 09 Dec 25 04:45 UTC │
	│ image   │ functional-667319 image ls                                                                                                                                      │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:45 UTC │ 09 Dec 25 04:45 UTC │
	│ image   │ functional-667319 image save --daemon kicbase/echo-server:functional-667319 --alsologtostderr                                                                   │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:45 UTC │ 09 Dec 25 04:45 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/09 04:33:11
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 04:33:11.365325 1193189 out.go:360] Setting OutFile to fd 1 ...
	I1209 04:33:11.365424 1193189 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 04:33:11.365428 1193189 out.go:374] Setting ErrFile to fd 2...
	I1209 04:33:11.365431 1193189 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 04:33:11.365670 1193189 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-1142328/.minikube/bin
	I1209 04:33:11.366033 1193189 out.go:368] Setting JSON to false
	I1209 04:33:11.366848 1193189 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":26115,"bootTime":1765228677,"procs":156,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1209 04:33:11.366902 1193189 start.go:143] virtualization:  
	I1209 04:33:11.370321 1193189 out.go:179] * [functional-667319] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1209 04:33:11.373998 1193189 out.go:179]   - MINIKUBE_LOCATION=22081
	I1209 04:33:11.374082 1193189 notify.go:221] Checking for updates...
	I1209 04:33:11.379822 1193189 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 04:33:11.382611 1193189 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22081-1142328/kubeconfig
	I1209 04:33:11.385432 1193189 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-1142328/.minikube
	I1209 04:33:11.388728 1193189 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1209 04:33:11.391441 1193189 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 04:33:11.394813 1193189 config.go:182] Loaded profile config "functional-667319": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1209 04:33:11.394910 1193189 driver.go:422] Setting default libvirt URI to qemu:///system
	I1209 04:33:11.422551 1193189 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1209 04:33:11.422654 1193189 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 04:33:11.481358 1193189 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-09 04:33:11.472506561 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1209 04:33:11.481459 1193189 docker.go:319] overlay module found
	I1209 04:33:11.484471 1193189 out.go:179] * Using the docker driver based on existing profile
	I1209 04:33:11.487406 1193189 start.go:309] selected driver: docker
	I1209 04:33:11.487427 1193189 start.go:927] validating driver "docker" against &{Name:functional-667319 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-667319 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 04:33:11.487512 1193189 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 04:33:11.487612 1193189 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 04:33:11.542290 1193189 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-09 04:33:11.533632532 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1209 04:33:11.542703 1193189 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 04:33:11.542726 1193189 cni.go:84] Creating CNI manager for ""
	I1209 04:33:11.542784 1193189 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1209 04:33:11.542826 1193189 start.go:353] cluster config:
	{Name:functional-667319 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-667319 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 04:33:11.546045 1193189 out.go:179] * Starting "functional-667319" primary control-plane node in "functional-667319" cluster
	I1209 04:33:11.548925 1193189 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1209 04:33:11.551638 1193189 out.go:179] * Pulling base image v0.0.48-1765184860-22066 ...
	I1209 04:33:11.554609 1193189 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1209 04:33:11.554645 1193189 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1209 04:33:11.554670 1193189 cache.go:65] Caching tarball of preloaded images
	I1209 04:33:11.554693 1193189 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local docker daemon
	I1209 04:33:11.554756 1193189 preload.go:238] Found /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1209 04:33:11.554765 1193189 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1209 04:33:11.554868 1193189 profile.go:143] Saving config to /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/config.json ...
	I1209 04:33:11.573683 1193189 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local docker daemon, skipping pull
	I1209 04:33:11.573695 1193189 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c exists in daemon, skipping load
	I1209 04:33:11.573713 1193189 cache.go:243] Successfully downloaded all kic artifacts
	I1209 04:33:11.573740 1193189 start.go:360] acquireMachinesLock for functional-667319: {Name:mk6c31f0747796f5f8ac8ea1653d6ee60fe2a47d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 04:33:11.573797 1193189 start.go:364] duration metric: took 42.739µs to acquireMachinesLock for "functional-667319"
	I1209 04:33:11.573815 1193189 start.go:96] Skipping create...Using existing machine configuration
	I1209 04:33:11.573819 1193189 fix.go:54] fixHost starting: 
	I1209 04:33:11.574074 1193189 cli_runner.go:164] Run: docker container inspect functional-667319 --format={{.State.Status}}
	I1209 04:33:11.589947 1193189 fix.go:112] recreateIfNeeded on functional-667319: state=Running err=<nil>
	W1209 04:33:11.589973 1193189 fix.go:138] unexpected machine state, will restart: <nil>
	I1209 04:33:11.593148 1193189 out.go:252] * Updating the running docker "functional-667319" container ...
	I1209 04:33:11.593168 1193189 machine.go:94] provisionDockerMachine start ...
	I1209 04:33:11.593256 1193189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-667319
	I1209 04:33:11.609392 1193189 main.go:143] libmachine: Using SSH client type: native
	I1209 04:33:11.609722 1193189 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 33900 <nil> <nil>}
	I1209 04:33:11.609729 1193189 main.go:143] libmachine: About to run SSH command:
	hostname
	I1209 04:33:11.759408 1193189 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-667319
	
	I1209 04:33:11.759422 1193189 ubuntu.go:182] provisioning hostname "functional-667319"
	I1209 04:33:11.759483 1193189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-667319
	I1209 04:33:11.776859 1193189 main.go:143] libmachine: Using SSH client type: native
	I1209 04:33:11.777189 1193189 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 33900 <nil> <nil>}
	I1209 04:33:11.777198 1193189 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-667319 && echo "functional-667319" | sudo tee /etc/hostname
	I1209 04:33:11.939211 1193189 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-667319
	
	I1209 04:33:11.939295 1193189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-667319
	I1209 04:33:11.957143 1193189 main.go:143] libmachine: Using SSH client type: native
	I1209 04:33:11.957494 1193189 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 33900 <nil> <nil>}
	I1209 04:33:11.957508 1193189 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-667319' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-667319/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-667319' | sudo tee -a /etc/hosts; 
				fi
			fi
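	
	The shell fragment above is the idempotent /etc/hosts mapping that minikube pushes over SSH: it touches the file only if no entry for the hostname exists, preferring to rewrite an existing 127.0.1.1 line over appending a new one. A standalone equivalent, sketched assuming GNU grep and sed as on this run's Debian guest, with the hostname taken from this log:
	
	    NAME=functional-667319
	    if ! grep -q "[[:space:]]${NAME}$" /etc/hosts; then
	      if grep -q '^127\.0\.1\.1[[:space:]]' /etc/hosts; then
	        # Rewrite the existing 127.0.1.1 line in place.
	        sudo sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 ${NAME}/" /etc/hosts
	      else
	        # No 127.0.1.1 line yet; append one.
	        echo "127.0.1.1 ${NAME}" | sudo tee -a /etc/hosts
	      fi
	    fi
	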
	I1209 04:33:12.113237 1193189 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1209 04:33:12.113254 1193189 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22081-1142328/.minikube CaCertPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22081-1142328/.minikube}
	I1209 04:33:12.113278 1193189 ubuntu.go:190] setting up certificates
	I1209 04:33:12.113294 1193189 provision.go:84] configureAuth start
	I1209 04:33:12.113362 1193189 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-667319
	I1209 04:33:12.130912 1193189 provision.go:143] copyHostCerts
	I1209 04:33:12.131003 1193189 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.pem, removing ...
	I1209 04:33:12.131010 1193189 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.pem
	I1209 04:33:12.131086 1193189 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.pem (1078 bytes)
	I1209 04:33:12.131177 1193189 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-1142328/.minikube/cert.pem, removing ...
	I1209 04:33:12.131181 1193189 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-1142328/.minikube/cert.pem
	I1209 04:33:12.131205 1193189 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22081-1142328/.minikube/cert.pem (1123 bytes)
	I1209 04:33:12.131250 1193189 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-1142328/.minikube/key.pem, removing ...
	I1209 04:33:12.131254 1193189 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-1142328/.minikube/key.pem
	I1209 04:33:12.131276 1193189 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22081-1142328/.minikube/key.pem (1675 bytes)
	I1209 04:33:12.131318 1193189 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca-key.pem org=jenkins.functional-667319 san=[127.0.0.1 192.168.49.2 functional-667319 localhost minikube]
	I1209 04:33:12.827484 1193189 provision.go:177] copyRemoteCerts
	I1209 04:33:12.827535 1193189 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 04:33:12.827573 1193189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-667319
	I1209 04:33:12.846654 1193189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/functional-667319/id_rsa Username:docker}
	I1209 04:33:12.951639 1193189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1209 04:33:12.968320 1193189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1209 04:33:12.985745 1193189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1209 04:33:13.004711 1193189 provision.go:87] duration metric: took 891.395644ms to configureAuth
	I1209 04:33:13.004730 1193189 ubuntu.go:206] setting minikube options for container-runtime
	I1209 04:33:13.005000 1193189 config.go:182] Loaded profile config "functional-667319": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1209 04:33:13.005006 1193189 machine.go:97] duration metric: took 1.411833664s to provisionDockerMachine
	I1209 04:33:13.005012 1193189 start.go:293] postStartSetup for "functional-667319" (driver="docker")
	I1209 04:33:13.005022 1193189 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 04:33:13.005072 1193189 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 04:33:13.005108 1193189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-667319
	I1209 04:33:13.023376 1193189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/functional-667319/id_rsa Username:docker}
	I1209 04:33:13.128032 1193189 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 04:33:13.131471 1193189 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1209 04:33:13.131490 1193189 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1209 04:33:13.131500 1193189 filesync.go:126] Scanning /home/jenkins/minikube-integration/22081-1142328/.minikube/addons for local assets ...
	I1209 04:33:13.131552 1193189 filesync.go:126] Scanning /home/jenkins/minikube-integration/22081-1142328/.minikube/files for local assets ...
	I1209 04:33:13.131625 1193189 filesync.go:149] local asset: /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/ssl/certs/11442312.pem -> 11442312.pem in /etc/ssl/certs
	I1209 04:33:13.131701 1193189 filesync.go:149] local asset: /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/test/nested/copy/1144231/hosts -> hosts in /etc/test/nested/copy/1144231
	I1209 04:33:13.131749 1193189 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1144231
	I1209 04:33:13.139091 1193189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/ssl/certs/11442312.pem --> /etc/ssl/certs/11442312.pem (1708 bytes)
	I1209 04:33:13.156114 1193189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/test/nested/copy/1144231/hosts --> /etc/test/nested/copy/1144231/hosts (40 bytes)
	I1209 04:33:13.173744 1193189 start.go:296] duration metric: took 168.716821ms for postStartSetup
	I1209 04:33:13.173816 1193189 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1209 04:33:13.173854 1193189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-667319
	I1209 04:33:13.198555 1193189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/functional-667319/id_rsa Username:docker}
	I1209 04:33:13.300903 1193189 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1209 04:33:13.305102 1193189 fix.go:56] duration metric: took 1.731276319s for fixHost
	I1209 04:33:13.305116 1193189 start.go:83] releasing machines lock for "functional-667319", held for 1.731312428s
	I1209 04:33:13.305216 1193189 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-667319
	I1209 04:33:13.322301 1193189 ssh_runner.go:195] Run: cat /version.json
	I1209 04:33:13.322356 1193189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-667319
	I1209 04:33:13.322602 1193189 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 04:33:13.322654 1193189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-667319
	I1209 04:33:13.345854 1193189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/functional-667319/id_rsa Username:docker}
	I1209 04:33:13.346808 1193189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/functional-667319/id_rsa Username:docker}
	I1209 04:33:13.447601 1193189 ssh_runner.go:195] Run: systemctl --version
	I1209 04:33:13.537710 1193189 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 04:33:13.542181 1193189 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 04:33:13.542253 1193189 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 04:33:13.550371 1193189 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
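	
	The find run just above disables any bridge or podman CNI configs by renaming rather than deleting them, so they remain recoverable as *.mk_disabled files. The same idea as a runnable sketch (the log strips the shell quoting; this restores it and drops the -printf bookkeeping):
	
	    # Rename conflicting CNI configs out of the runtime's search path.
	    sudo find /etc/cni/net.d -maxdepth 1 -type f \
	      \( \( -name '*bridge*' -o -name '*podman*' \) -a -not -name '*.mk_disabled' \) \
	      -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
	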
	I1209 04:33:13.550385 1193189 start.go:496] detecting cgroup driver to use...
	I1209 04:33:13.550417 1193189 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1209 04:33:13.550479 1193189 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1209 04:33:13.565987 1193189 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1209 04:33:13.579220 1193189 docker.go:218] disabling cri-docker service (if available) ...
	I1209 04:33:13.579279 1193189 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 04:33:13.594632 1193189 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 04:33:13.607810 1193189 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 04:33:13.745867 1193189 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 04:33:13.855372 1193189 docker.go:234] disabling docker service ...
	I1209 04:33:13.855434 1193189 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 04:33:13.878271 1193189 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 04:33:13.891442 1193189 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 04:33:14.014618 1193189 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 04:33:14.144235 1193189 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 04:33:14.157713 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 04:33:14.171634 1193189 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1209 04:33:14.180595 1193189 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1209 04:33:14.189855 1193189 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1209 04:33:14.189928 1193189 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1209 04:33:14.198663 1193189 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1209 04:33:14.207241 1193189 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1209 04:33:14.215864 1193189 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1209 04:33:14.224572 1193189 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 04:33:14.232585 1193189 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1209 04:33:14.241204 1193189 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1209 04:33:14.249919 1193189 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1209 04:33:14.258812 1193189 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 04:33:14.266241 1193189 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 04:33:14.273587 1193189 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 04:33:14.393428 1193189 ssh_runner.go:195] Run: sudo systemctl restart containerd
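	
	The sequence above rewrites /etc/containerd/config.toml in place with sed and then restarts containerd. For this run the edits that matter are forcing the cgroupfs driver (SystemdCgroup = false, matching the cgroupfs driver detected on the host) and pinning the sandbox image. Condensed from the commands in the log, assuming an existing config.toml:
	
	    # Force cgroupfs and pin the pause image, then restart containerd.
	    sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml
	    sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml
	    sudo systemctl daemon-reload
	    sudo systemctl restart containerd
	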
	I1209 04:33:14.528665 1193189 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1209 04:33:14.528726 1193189 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1209 04:33:14.532955 1193189 start.go:564] Will wait 60s for crictl version
	I1209 04:33:14.533056 1193189 ssh_runner.go:195] Run: which crictl
	I1209 04:33:14.541891 1193189 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1209 04:33:14.570282 1193189 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1209 04:33:14.570350 1193189 ssh_runner.go:195] Run: containerd --version
	I1209 04:33:14.592081 1193189 ssh_runner.go:195] Run: containerd --version
	I1209 04:33:14.617312 1193189 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1209 04:33:14.620294 1193189 cli_runner.go:164] Run: docker network inspect functional-667319 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1209 04:33:14.636105 1193189 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1209 04:33:14.643286 1193189 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1209 04:33:14.646097 1193189 kubeadm.go:884] updating cluster {Name:functional-667319 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-667319 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 04:33:14.646234 1193189 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1209 04:33:14.646312 1193189 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 04:33:14.671604 1193189 containerd.go:627] all images are preloaded for containerd runtime.
	I1209 04:33:14.671615 1193189 containerd.go:534] Images already preloaded, skipping extraction
	I1209 04:33:14.671676 1193189 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 04:33:14.702360 1193189 containerd.go:627] all images are preloaded for containerd runtime.
	I1209 04:33:14.702371 1193189 cache_images.go:86] Images are preloaded, skipping loading
	I1209 04:33:14.702376 1193189 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
	I1209 04:33:14.702482 1193189 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-667319 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-667319 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1209 04:33:14.702549 1193189 ssh_runner.go:195] Run: sudo crictl info
	I1209 04:33:14.731154 1193189 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1209 04:33:14.731172 1193189 cni.go:84] Creating CNI manager for ""
	I1209 04:33:14.731179 1193189 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1209 04:33:14.731190 1193189 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1209 04:33:14.731212 1193189 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-667319 NodeName:functional-667319 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false Kubel
etConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1209 04:33:14.731316 1193189 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-667319"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
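	
	The generated config above is not fed to a single kubeadm init; on this restart path minikube drives the individual init phases against it (later in this log, with the minikube-bundled kubeadm on PATH). The shape of those invocations:
	
	    # Each phase reads the same generated config file.
	    sudo kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml
	    sudo kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml
	    sudo kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml
	    sudo kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml
	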
	
	I1209 04:33:14.731385 1193189 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1209 04:33:14.742794 1193189 binaries.go:51] Found k8s binaries, skipping transfer
	I1209 04:33:14.742854 1193189 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1209 04:33:14.750345 1193189 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1209 04:33:14.763345 1193189 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1209 04:33:14.775780 1193189 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2087 bytes)
	I1209 04:33:14.788798 1193189 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1209 04:33:14.792560 1193189 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 04:33:14.907792 1193189 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 04:33:15.431459 1193189 certs.go:69] Setting up /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319 for IP: 192.168.49.2
	I1209 04:33:15.431470 1193189 certs.go:195] generating shared ca certs ...
	I1209 04:33:15.431485 1193189 certs.go:227] acquiring lock for ca certs: {Name:mk15788702f8c4e23b5aeab3f44961d296fab259 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 04:33:15.431654 1193189 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.key
	I1209 04:33:15.431695 1193189 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/proxy-client-ca.key
	I1209 04:33:15.431701 1193189 certs.go:257] generating profile certs ...
	I1209 04:33:15.431782 1193189 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/client.key
	I1209 04:33:15.431840 1193189 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/apiserver.key.c80eb595
	I1209 04:33:15.431875 1193189 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/proxy-client.key
	I1209 04:33:15.431982 1193189 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/1144231.pem (1338 bytes)
	W1209 04:33:15.432037 1193189 certs.go:480] ignoring /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/1144231_empty.pem, impossibly tiny 0 bytes
	I1209 04:33:15.432046 1193189 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca-key.pem (1679 bytes)
	I1209 04:33:15.432075 1193189 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem (1078 bytes)
	I1209 04:33:15.432099 1193189 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/cert.pem (1123 bytes)
	I1209 04:33:15.432147 1193189 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/key.pem (1675 bytes)
	I1209 04:33:15.432195 1193189 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/ssl/certs/11442312.pem (1708 bytes)
	I1209 04:33:15.432796 1193189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 04:33:15.450868 1193189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1209 04:33:15.469951 1193189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 04:33:15.488029 1193189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1209 04:33:15.507676 1193189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1209 04:33:15.528269 1193189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1209 04:33:15.547354 1193189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 04:33:15.565510 1193189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1209 04:33:15.583378 1193189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/ssl/certs/11442312.pem --> /usr/share/ca-certificates/11442312.pem (1708 bytes)
	I1209 04:33:15.601546 1193189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 04:33:15.619028 1193189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/1144231.pem --> /usr/share/ca-certificates/1144231.pem (1338 bytes)
	I1209 04:33:15.636618 1193189 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 04:33:15.649310 1193189 ssh_runner.go:195] Run: openssl version
	I1209 04:33:15.655222 1193189 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/11442312.pem
	I1209 04:33:15.662530 1193189 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/11442312.pem /etc/ssl/certs/11442312.pem
	I1209 04:33:15.670168 1193189 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11442312.pem
	I1209 04:33:15.673829 1193189 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 04:18 /usr/share/ca-certificates/11442312.pem
	I1209 04:33:15.673881 1193189 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11442312.pem
	I1209 04:33:15.715756 1193189 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1209 04:33:15.723175 1193189 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1209 04:33:15.730584 1193189 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1209 04:33:15.738232 1193189 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 04:33:15.742081 1193189 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 04:09 /usr/share/ca-certificates/minikubeCA.pem
	I1209 04:33:15.742141 1193189 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 04:33:15.786133 1193189 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1209 04:33:15.793720 1193189 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1144231.pem
	I1209 04:33:15.801263 1193189 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1144231.pem /etc/ssl/certs/1144231.pem
	I1209 04:33:15.808357 1193189 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1144231.pem
	I1209 04:33:15.812098 1193189 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 04:18 /usr/share/ca-certificates/1144231.pem
	I1209 04:33:15.812149 1193189 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1144231.pem
	I1209 04:33:15.854297 1193189 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
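	
	The three openssl/ln passes above implement OpenSSL's subject-hash lookup: each CA PEM in /usr/share/ca-certificates gets a symlink in /etc/ssl/certs named <subject-hash>.0 (b5213941.0 for minikubeCA.pem in this run), which is how OpenSSL finds a CA without scanning every file. The pattern for one certificate, as a sketch:
	
	    CERT=/usr/share/ca-certificates/minikubeCA.pem
	    # Subject-hash of the certificate, used as the lookup filename.
	    HASH=$(openssl x509 -hash -noout -in "$CERT")
	    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"
	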
	I1209 04:33:15.861740 1193189 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 04:33:15.865303 1193189 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1209 04:33:15.905838 1193189 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1209 04:33:15.946617 1193189 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1209 04:33:15.987357 1193189 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1209 04:33:16.032170 1193189 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1209 04:33:16.075134 1193189 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
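	
	Each openssl run above is a pure exit-status probe: -checkend 86400 succeeds only if the certificate is still valid 24 hours from now, so a cert close to expiry fails the check and forces regeneration. For example:
	
	    # Exit status alone carries the result of -checkend.
	    if openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400; then
	      echo "certificate valid for at least another 24h"
	    else
	      echo "certificate expires within 24h (or could not be read)"
	    fi
	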
	I1209 04:33:16.116540 1193189 kubeadm.go:401] StartCluster: {Name:functional-667319 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-667319 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 04:33:16.116615 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1209 04:33:16.116676 1193189 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 04:33:16.141721 1193189 cri.go:89] found id: ""
	I1209 04:33:16.141780 1193189 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1209 04:33:16.149204 1193189 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1209 04:33:16.149214 1193189 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1209 04:33:16.149263 1193189 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1209 04:33:16.156279 1193189 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1209 04:33:16.156783 1193189 kubeconfig.go:125] found "functional-667319" server: "https://192.168.49.2:8441"
	I1209 04:33:16.159840 1193189 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1209 04:33:16.167426 1193189 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-09 04:18:41.945308258 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-09 04:33:14.782796805 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
	I1209 04:33:16.167445 1193189 kubeadm.go:1161] stopping kube-system containers ...
	I1209 04:33:16.167459 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I1209 04:33:16.167517 1193189 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 04:33:16.201963 1193189 cri.go:89] found id: ""
	I1209 04:33:16.202024 1193189 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1209 04:33:16.219973 1193189 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 04:33:16.227472 1193189 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5635 Dec  9 04:22 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Dec  9 04:22 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Dec  9 04:22 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5588 Dec  9 04:22 /etc/kubernetes/scheduler.conf
	
	I1209 04:33:16.227532 1193189 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1209 04:33:16.234796 1193189 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1209 04:33:16.241862 1193189 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1209 04:33:16.241916 1193189 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 04:33:16.249083 1193189 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1209 04:33:16.256206 1193189 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1209 04:33:16.256261 1193189 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 04:33:16.263352 1193189 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1209 04:33:16.270362 1193189 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1209 04:33:16.270416 1193189 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1209 04:33:16.277706 1193189 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 04:33:16.285107 1193189 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 04:33:16.327899 1193189 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 04:33:17.810490 1193189 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.482563431s)
	I1209 04:33:17.810548 1193189 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1209 04:33:18.017563 1193189 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 04:33:18.086202 1193189 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1209 04:33:18.134715 1193189 api_server.go:52] waiting for apiserver process to appear ...
	I1209 04:33:18.134785 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:18.635261 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:19.135782 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:19.634982 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:20.134970 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:20.634979 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:21.134982 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:21.634901 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:22.135638 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:22.635624 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:23.134983 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:23.634978 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:24.135473 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:24.634966 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:25.135742 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:25.635347 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:26.134954 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:26.635380 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:27.134976 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:27.635752 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:28.135296 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:28.634924 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:29.134984 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:29.635367 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:30.135822 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:30.635721 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:31.135397 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:31.635633 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:32.134956 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:32.634993 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:33.135921 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:33.635624 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:34.134951 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:34.635593 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:35.134950 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:35.634961 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:36.134953 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:36.634877 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:37.135675 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:37.634982 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:38.135060 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:38.635809 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:39.135591 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:39.634959 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:40.135841 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:40.635611 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:41.135199 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:41.635170 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:42.134924 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:42.634948 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:43.135679 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:43.635637 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:44.134963 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:44.634963 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:45.135229 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:45.635702 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:46.134937 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:46.634881 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:47.135215 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:47.634980 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:48.134999 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:48.635744 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:49.135351 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:49.634915 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:50.135024 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:50.634852 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:51.134961 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:51.635396 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:52.135636 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:52.635513 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:53.135240 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:53.634952 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:54.135504 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:54.634869 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:55.135747 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:55.635267 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:56.135830 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:56.635547 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:57.134988 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:57.635506 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:58.135689 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:58.634992 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:59.135820 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:33:59.635373 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:34:00.135881 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:34:00.634984 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:34:01.135667 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:34:01.635758 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:34:02.135376 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:34:02.635880 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:34:03.135850 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:34:03.635021 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:34:04.135603 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:34:04.634975 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:34:05.135311 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:34:05.635291 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:34:06.135867 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:34:06.635018 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:34:07.135547 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:34:07.634967 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:34:08.134945 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:34:08.634950 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:34:09.135735 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:34:09.635308 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:34:10.135291 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:34:10.635185 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:34:11.134976 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:34:11.635433 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:34:12.134976 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:34:12.634985 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:34:13.134972 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:34:13.634991 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:34:14.135750 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:34:14.635398 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:34:15.135547 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:34:15.635003 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:34:16.135840 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:34:16.635833 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:34:17.135311 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:34:17.635902 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
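The run above is the apiserver wait loop: the identical pgrep probe is reissued roughly every 500ms from 04:33:18 to 04:34:17 without ever matching, at which point the wait is abandoned and diagnostics begin. A hedged Go sketch of this poll-until-deadline pattern (the 500ms interval and ~60s budget are inferred from the timestamps; the names are illustrative, not minikube's actual implementation):

    // Sketch: poll for the kube-apiserver process until a deadline,
    // matching the ~500ms probe spacing visible in the timestamps above.
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func waitForAPIServer(budget time.Duration) error {
        deadline := time.After(budget)
        tick := time.NewTicker(500 * time.Millisecond)
        defer tick.Stop()
        for {
            select {
            case <-deadline:
                return fmt.Errorf("apiserver process did not appear within %s", budget)
            case <-tick.C:
                // The same probe the log repeats: pgrep for an apiserver
                // whose command line mentions minikube.
                if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
                    return nil
                }
            }
        }
    }

    func main() {
        if err := waitForAPIServer(60 * time.Second); err != nil {
            fmt.Println(err) // the failing run above takes this branch
        }
    }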
	I1209 04:34:18.135877 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:34:18.135980 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:34:18.160417 1193189 cri.go:89] found id: ""
	I1209 04:34:18.160431 1193189 logs.go:282] 0 containers: []
	W1209 04:34:18.160438 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:34:18.160442 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:34:18.160499 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:34:18.186014 1193189 cri.go:89] found id: ""
	I1209 04:34:18.186028 1193189 logs.go:282] 0 containers: []
	W1209 04:34:18.186035 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:34:18.186040 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:34:18.186102 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:34:18.209963 1193189 cri.go:89] found id: ""
	I1209 04:34:18.209977 1193189 logs.go:282] 0 containers: []
	W1209 04:34:18.209983 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:34:18.209989 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:34:18.210048 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:34:18.234704 1193189 cri.go:89] found id: ""
	I1209 04:34:18.234723 1193189 logs.go:282] 0 containers: []
	W1209 04:34:18.234730 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:34:18.234737 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:34:18.234794 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:34:18.260085 1193189 cri.go:89] found id: ""
	I1209 04:34:18.260100 1193189 logs.go:282] 0 containers: []
	W1209 04:34:18.260107 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:34:18.260112 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:34:18.260170 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:34:18.284959 1193189 cri.go:89] found id: ""
	I1209 04:34:18.284972 1193189 logs.go:282] 0 containers: []
	W1209 04:34:18.284978 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:34:18.284983 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:34:18.285040 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:34:18.313883 1193189 cri.go:89] found id: ""
	I1209 04:34:18.313898 1193189 logs.go:282] 0 containers: []
	W1209 04:34:18.313905 1193189 logs.go:284] No container was found matching "kindnet"
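With no apiserver process found, each control-plane component is then looked up as a container instead: crictl ps -a --quiet --name=<component> prints matching container IDs one per line, and the empty results produce the found id: "" / 0 containers warnings above. A sketch of that per-component check (the component list is taken from the log; the Go wrapper is illustrative):

    // Sketch: look up each control-plane component as a CRI container.
    // Empty output from crictl is the "0 containers" case warned about above.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet",
        }
        for _, name := range components {
            out, _ := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
            if ids := strings.Fields(string(out)); len(ids) > 0 {
                fmt.Printf("%s: %d container(s)\n", name, len(ids))
            } else {
                fmt.Printf("no container found matching %q\n", name)
            }
        }
    }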
	I1209 04:34:18.313912 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:34:18.313923 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:34:18.330120 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:34:18.330138 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:34:18.391936 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:34:18.383205   10728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:18.383825   10728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:18.385661   10728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:18.386372   10728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:18.388209   10728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:34:18.383205   10728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:18.383825   10728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:18.385661   10728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:18.386372   10728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:18.388209   10728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
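Every describe-nodes attempt in these diagnostic cycles fails the same way: kubectl cannot reach https://localhost:8441 because no apiserver container ever started, so the endpoint in /var/lib/minikube/kubeconfig refuses the TCP connection outright. A sketch reproducing just that reachability probe in Go (the endpoint and the 32s timeout come from the log; TLS verification is skipped only because connectivity, not identity, is being tested):

    // Sketch: the reachability probe every describe-nodes attempt is
    // implicitly performing. With nothing listening on 8441 the dial fails
    // with "connect: connection refused", exactly as logged above.
    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 32 * time.Second, // kubectl's ?timeout=32s from the log
            Transport: &http.Transport{
                // Connectivity check only; never skip verification for real API calls.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get("https://localhost:8441/api")
        if err != nil {
            fmt.Println("apiserver unreachable:", err)
            return
        }
        defer resp.Body.Close()
        fmt.Println("apiserver responded:", resp.Status)
    }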
	I1209 04:34:18.391947 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:34:18.391957 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:34:18.457339 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:34:18.457361 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:34:18.484687 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:34:18.484702 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
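The gathering steps above sweep the standard diagnostic sources: kernel warnings via dmesg, the containerd and kubelet journals, and container status via crictl with a docker fallback for hosts where crictl is not on PATH. A sketch of the same sweep (the shell pipelines are verbatim from the log; the Go driver is illustrative):

    // Sketch: the diagnostic sweep shown above, one shell pipeline per
    // log source, each run through bash so pipes and fallbacks work.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        sources := []struct{ name, cmd string }{
            {"dmesg", `sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`},
            {"containerd", `sudo journalctl -u containerd -n 400`},
            {"kubelet", `sudo journalctl -u kubelet -n 400`},
            {"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
        }
        for _, s := range sources {
            out, err := exec.Command("/bin/bash", "-c", s.cmd).CombinedOutput()
            if err != nil {
                fmt.Printf("gathering %s failed: %v\n", s.name, err)
                continue
            }
            fmt.Printf("=== %s ===\n%s\n", s.name, out)
        }
    }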
	I1209 04:34:21.045358 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:34:21.056486 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:34:21.056551 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:34:21.085673 1193189 cri.go:89] found id: ""
	I1209 04:34:21.085687 1193189 logs.go:282] 0 containers: []
	W1209 04:34:21.085693 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:34:21.085699 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:34:21.085758 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:34:21.111043 1193189 cri.go:89] found id: ""
	I1209 04:34:21.111056 1193189 logs.go:282] 0 containers: []
	W1209 04:34:21.111063 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:34:21.111068 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:34:21.111128 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:34:21.137031 1193189 cri.go:89] found id: ""
	I1209 04:34:21.137044 1193189 logs.go:282] 0 containers: []
	W1209 04:34:21.137051 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:34:21.137057 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:34:21.137118 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:34:21.161998 1193189 cri.go:89] found id: ""
	I1209 04:34:21.162012 1193189 logs.go:282] 0 containers: []
	W1209 04:34:21.162019 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:34:21.162024 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:34:21.162088 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:34:21.185710 1193189 cri.go:89] found id: ""
	I1209 04:34:21.185733 1193189 logs.go:282] 0 containers: []
	W1209 04:34:21.185740 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:34:21.185745 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:34:21.185805 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:34:21.209921 1193189 cri.go:89] found id: ""
	I1209 04:34:21.209934 1193189 logs.go:282] 0 containers: []
	W1209 04:34:21.209941 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:34:21.209946 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:34:21.210007 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:34:21.237263 1193189 cri.go:89] found id: ""
	I1209 04:34:21.237277 1193189 logs.go:282] 0 containers: []
	W1209 04:34:21.237284 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:34:21.237291 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:34:21.237302 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:34:21.253947 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:34:21.253964 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:34:21.323683 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:34:21.314716   10833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:21.315370   10833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:21.316976   10833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:21.317539   10833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:21.318471   10833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:34:21.314716   10833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:21.315370   10833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:21.316976   10833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:21.317539   10833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:21.318471   10833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:34:21.323693 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:34:21.323704 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:34:21.385947 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:34:21.385968 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:34:21.414692 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:34:21.414709 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:34:23.972329 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:34:23.982273 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:34:23.982333 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:34:24.008968 1193189 cri.go:89] found id: ""
	I1209 04:34:24.008983 1193189 logs.go:282] 0 containers: []
	W1209 04:34:24.008997 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:34:24.009002 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:34:24.009067 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:34:24.035053 1193189 cri.go:89] found id: ""
	I1209 04:34:24.035067 1193189 logs.go:282] 0 containers: []
	W1209 04:34:24.035074 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:34:24.035082 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:34:24.035155 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:34:24.060177 1193189 cri.go:89] found id: ""
	I1209 04:34:24.060202 1193189 logs.go:282] 0 containers: []
	W1209 04:34:24.060210 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:34:24.060215 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:34:24.060278 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:34:24.087352 1193189 cri.go:89] found id: ""
	I1209 04:34:24.087365 1193189 logs.go:282] 0 containers: []
	W1209 04:34:24.087372 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:34:24.087377 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:34:24.087436 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:34:24.112436 1193189 cri.go:89] found id: ""
	I1209 04:34:24.112450 1193189 logs.go:282] 0 containers: []
	W1209 04:34:24.112457 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:34:24.112463 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:34:24.112523 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:34:24.138043 1193189 cri.go:89] found id: ""
	I1209 04:34:24.138057 1193189 logs.go:282] 0 containers: []
	W1209 04:34:24.138063 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:34:24.138068 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:34:24.138127 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:34:24.162473 1193189 cri.go:89] found id: ""
	I1209 04:34:24.162486 1193189 logs.go:282] 0 containers: []
	W1209 04:34:24.162493 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:34:24.162501 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:34:24.162512 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:34:24.218725 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:34:24.218750 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:34:24.237014 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:34:24.237032 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:34:24.301761 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:34:24.293159   10942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:24.293842   10942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:24.295579   10942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:24.296219   10942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:24.297932   10942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:34:24.293159   10942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:24.293842   10942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:24.295579   10942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:24.296219   10942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:24.297932   10942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:34:24.301771 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:34:24.301782 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:34:24.364794 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:34:24.364819 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:34:26.896098 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:34:26.905998 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:34:26.906059 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:34:26.937370 1193189 cri.go:89] found id: ""
	I1209 04:34:26.937384 1193189 logs.go:282] 0 containers: []
	W1209 04:34:26.937390 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:34:26.937395 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:34:26.937455 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:34:26.961993 1193189 cri.go:89] found id: ""
	I1209 04:34:26.962006 1193189 logs.go:282] 0 containers: []
	W1209 04:34:26.962013 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:34:26.962018 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:34:26.962075 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:34:26.991456 1193189 cri.go:89] found id: ""
	I1209 04:34:26.991470 1193189 logs.go:282] 0 containers: []
	W1209 04:34:26.991476 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:34:26.991495 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:34:26.991554 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:34:27.018891 1193189 cri.go:89] found id: ""
	I1209 04:34:27.018904 1193189 logs.go:282] 0 containers: []
	W1209 04:34:27.018911 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:34:27.018916 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:34:27.018974 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:34:27.043050 1193189 cri.go:89] found id: ""
	I1209 04:34:27.043064 1193189 logs.go:282] 0 containers: []
	W1209 04:34:27.043070 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:34:27.043083 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:34:27.043141 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:34:27.069538 1193189 cri.go:89] found id: ""
	I1209 04:34:27.069553 1193189 logs.go:282] 0 containers: []
	W1209 04:34:27.069559 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:34:27.069564 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:34:27.069624 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:34:27.092560 1193189 cri.go:89] found id: ""
	I1209 04:34:27.092573 1193189 logs.go:282] 0 containers: []
	W1209 04:34:27.092580 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:34:27.092588 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:34:27.092597 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:34:27.149471 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:34:27.149509 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:34:27.166396 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:34:27.166413 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:34:27.233147 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:34:27.224772   11045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:27.225484   11045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:27.227169   11045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:27.227648   11045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:27.229172   11045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:34:27.224772   11045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:27.225484   11045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:27.227169   11045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:27.227648   11045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:27.229172   11045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:34:27.233160 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:34:27.233171 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:34:27.300582 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:34:27.300607 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:34:29.831076 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:34:29.841031 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:34:29.841110 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:34:29.870054 1193189 cri.go:89] found id: ""
	I1209 04:34:29.870068 1193189 logs.go:282] 0 containers: []
	W1209 04:34:29.870074 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:34:29.870080 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:34:29.870148 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:34:29.893884 1193189 cri.go:89] found id: ""
	I1209 04:34:29.893897 1193189 logs.go:282] 0 containers: []
	W1209 04:34:29.893904 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:34:29.893909 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:34:29.893984 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:34:29.917545 1193189 cri.go:89] found id: ""
	I1209 04:34:29.917559 1193189 logs.go:282] 0 containers: []
	W1209 04:34:29.917565 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:34:29.917570 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:34:29.917636 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:34:29.948707 1193189 cri.go:89] found id: ""
	I1209 04:34:29.948721 1193189 logs.go:282] 0 containers: []
	W1209 04:34:29.948727 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:34:29.948733 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:34:29.948792 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:34:29.988977 1193189 cri.go:89] found id: ""
	I1209 04:34:29.988990 1193189 logs.go:282] 0 containers: []
	W1209 04:34:29.988997 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:34:29.989003 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:34:29.989058 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:34:30.029614 1193189 cri.go:89] found id: ""
	I1209 04:34:30.029653 1193189 logs.go:282] 0 containers: []
	W1209 04:34:30.029660 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:34:30.029666 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:34:30.029747 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:34:30.057862 1193189 cri.go:89] found id: ""
	I1209 04:34:30.057877 1193189 logs.go:282] 0 containers: []
	W1209 04:34:30.057884 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:34:30.057892 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:34:30.057903 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:34:30.125643 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:34:30.125665 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:34:30.154365 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:34:30.154393 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:34:30.218342 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:34:30.218370 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:34:30.235415 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:34:30.235438 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:34:30.300328 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:34:30.292511   11158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:30.293159   11158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:30.294635   11158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:30.295041   11158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:30.296473   11158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:34:30.292511   11158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:30.293159   11158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:30.294635   11158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:30.295041   11158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:30.296473   11158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:34:32.800607 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:34:32.810690 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:34:32.810752 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:34:32.837030 1193189 cri.go:89] found id: ""
	I1209 04:34:32.837045 1193189 logs.go:282] 0 containers: []
	W1209 04:34:32.837052 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:34:32.837058 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:34:32.837136 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:34:32.863207 1193189 cri.go:89] found id: ""
	I1209 04:34:32.863221 1193189 logs.go:282] 0 containers: []
	W1209 04:34:32.863227 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:34:32.863242 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:34:32.863302 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:34:32.888280 1193189 cri.go:89] found id: ""
	I1209 04:34:32.888294 1193189 logs.go:282] 0 containers: []
	W1209 04:34:32.888301 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:34:32.888306 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:34:32.888365 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:34:32.912361 1193189 cri.go:89] found id: ""
	I1209 04:34:32.912375 1193189 logs.go:282] 0 containers: []
	W1209 04:34:32.912381 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:34:32.912387 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:34:32.912447 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:34:32.944341 1193189 cri.go:89] found id: ""
	I1209 04:34:32.944355 1193189 logs.go:282] 0 containers: []
	W1209 04:34:32.944363 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:34:32.944368 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:34:32.944427 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:34:32.974577 1193189 cri.go:89] found id: ""
	I1209 04:34:32.974592 1193189 logs.go:282] 0 containers: []
	W1209 04:34:32.974599 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:34:32.974604 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:34:32.974667 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:34:33.007167 1193189 cri.go:89] found id: ""
	I1209 04:34:33.007182 1193189 logs.go:282] 0 containers: []
	W1209 04:34:33.007188 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:34:33.007197 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:34:33.007208 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:34:33.072653 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:34:33.064421   11240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:33.065259   11240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:33.066881   11240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:33.067179   11240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:33.068654   11240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:34:33.064421   11240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:33.065259   11240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:33.066881   11240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:33.067179   11240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:33.068654   11240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:34:33.072662 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:34:33.072674 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:34:33.135053 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:34:33.135075 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:34:33.166357 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:34:33.166374 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:34:33.223824 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:34:33.223844 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:34:35.741231 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:34:35.751318 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:34:35.751378 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:34:35.776735 1193189 cri.go:89] found id: ""
	I1209 04:34:35.776749 1193189 logs.go:282] 0 containers: []
	W1209 04:34:35.776755 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:34:35.776760 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:34:35.776825 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:34:35.805165 1193189 cri.go:89] found id: ""
	I1209 04:34:35.805178 1193189 logs.go:282] 0 containers: []
	W1209 04:34:35.805185 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:34:35.805190 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:34:35.805255 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:34:35.834579 1193189 cri.go:89] found id: ""
	I1209 04:34:35.834592 1193189 logs.go:282] 0 containers: []
	W1209 04:34:35.834599 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:34:35.834604 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:34:35.834668 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:34:35.864666 1193189 cri.go:89] found id: ""
	I1209 04:34:35.864680 1193189 logs.go:282] 0 containers: []
	W1209 04:34:35.864687 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:34:35.864692 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:34:35.864753 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:34:35.888987 1193189 cri.go:89] found id: ""
	I1209 04:34:35.889001 1193189 logs.go:282] 0 containers: []
	W1209 04:34:35.889008 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:34:35.889013 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:34:35.889073 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:34:35.913760 1193189 cri.go:89] found id: ""
	I1209 04:34:35.913774 1193189 logs.go:282] 0 containers: []
	W1209 04:34:35.913781 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:34:35.913787 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:34:35.913848 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:34:35.953491 1193189 cri.go:89] found id: ""
	I1209 04:34:35.953504 1193189 logs.go:282] 0 containers: []
	W1209 04:34:35.953511 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:34:35.953519 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:34:35.953529 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:34:36.017926 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:34:36.017947 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:34:36.036525 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:34:36.036542 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:34:36.100279 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:34:36.091110   11352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:36.091738   11352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:36.093351   11352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:36.093993   11352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:36.095716   11352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:34:36.100289 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:34:36.100302 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:34:36.165176 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:34:36.165198 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
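The block above is one pass of minikube's diagnostic loop: with the apiserver down, it queries the CRI for each expected control-plane container and finds none. A minimal sketch of that per-component check, using the same crictl invocation the Run: lines show (assuming crictl on the node talks to the default containerd socket):

    #!/bin/bash
    # Ask the CRI for each expected control-plane container, as in the
    # "listing CRI containers" lines above; an empty result is what the
    # log reports as 'No container was found matching "<name>"'.
    for name in kube-apiserver etcd coredns kube-scheduler \
                kube-proxy kube-controller-manager kindnet; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      if [ -z "$ids" ]; then
        echo "no container matching \"$name\""
      else
        echo "$name: $ids"
      fi
    done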
	I1209 04:34:38.692274 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:34:38.702150 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:34:38.702209 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:34:38.727703 1193189 cri.go:89] found id: ""
	I1209 04:34:38.727718 1193189 logs.go:282] 0 containers: []
	W1209 04:34:38.727725 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:34:38.727739 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:34:38.727802 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:34:38.752490 1193189 cri.go:89] found id: ""
	I1209 04:34:38.752509 1193189 logs.go:282] 0 containers: []
	W1209 04:34:38.752515 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:34:38.752521 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:34:38.752582 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:34:38.776648 1193189 cri.go:89] found id: ""
	I1209 04:34:38.776662 1193189 logs.go:282] 0 containers: []
	W1209 04:34:38.776668 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:34:38.776676 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:34:38.776735 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:34:38.801762 1193189 cri.go:89] found id: ""
	I1209 04:34:38.801775 1193189 logs.go:282] 0 containers: []
	W1209 04:34:38.801782 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:34:38.801788 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:34:38.801849 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:34:38.825649 1193189 cri.go:89] found id: ""
	I1209 04:34:38.825662 1193189 logs.go:282] 0 containers: []
	W1209 04:34:38.825668 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:34:38.825673 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:34:38.825734 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:34:38.850253 1193189 cri.go:89] found id: ""
	I1209 04:34:38.850268 1193189 logs.go:282] 0 containers: []
	W1209 04:34:38.850274 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:34:38.850280 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:34:38.850342 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:34:38.878018 1193189 cri.go:89] found id: ""
	I1209 04:34:38.878032 1193189 logs.go:282] 0 containers: []
	W1209 04:34:38.878039 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:34:38.878046 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:34:38.878056 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:34:38.937715 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:34:38.937734 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:34:38.956265 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:34:38.956289 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:34:39.027118 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:34:39.019252   11455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:39.020066   11455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:39.021610   11455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:39.021907   11455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:39.023382   11455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:34:39.027128 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:34:39.027140 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:34:39.093921 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:34:39.093942 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:34:41.623796 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:34:41.634102 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:34:41.634167 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:34:41.661702 1193189 cri.go:89] found id: ""
	I1209 04:34:41.661716 1193189 logs.go:282] 0 containers: []
	W1209 04:34:41.661723 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:34:41.661728 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:34:41.661793 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:34:41.686941 1193189 cri.go:89] found id: ""
	I1209 04:34:41.686955 1193189 logs.go:282] 0 containers: []
	W1209 04:34:41.686962 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:34:41.686967 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:34:41.687026 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:34:41.716790 1193189 cri.go:89] found id: ""
	I1209 04:34:41.716805 1193189 logs.go:282] 0 containers: []
	W1209 04:34:41.716813 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:34:41.716818 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:34:41.716881 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:34:41.741120 1193189 cri.go:89] found id: ""
	I1209 04:34:41.741135 1193189 logs.go:282] 0 containers: []
	W1209 04:34:41.741141 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:34:41.741147 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:34:41.741206 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:34:41.765600 1193189 cri.go:89] found id: ""
	I1209 04:34:41.765614 1193189 logs.go:282] 0 containers: []
	W1209 04:34:41.765622 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:34:41.765627 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:34:41.765687 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:34:41.789956 1193189 cri.go:89] found id: ""
	I1209 04:34:41.789971 1193189 logs.go:282] 0 containers: []
	W1209 04:34:41.789978 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:34:41.789983 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:34:41.790047 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:34:41.813854 1193189 cri.go:89] found id: ""
	I1209 04:34:41.813868 1193189 logs.go:282] 0 containers: []
	W1209 04:34:41.813875 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:34:41.813883 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:34:41.813893 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:34:41.869283 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:34:41.869303 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:34:41.886263 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:34:41.886279 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:34:41.966783 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:34:41.957901   11556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:41.958580   11556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:41.960469   11556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:41.961191   11556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:41.962837   11556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:34:41.966793 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:34:41.966810 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:34:42.035421 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:34:42.035443 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
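For reference, each pass then gathers the same five diagnostic sources; the commands below are copied from the Run: lines above (the describe-nodes step is the one that keeps failing while the apiserver is unreachable):

    #!/bin/bash
    # The five log sources collected on every pass, verbatim from the log.
    sudo journalctl -u kubelet -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig
    sudo journalctl -u containerd -n 400
    sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a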
	I1209 04:34:44.567350 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:34:44.577592 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:34:44.577656 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:34:44.607032 1193189 cri.go:89] found id: ""
	I1209 04:34:44.607047 1193189 logs.go:282] 0 containers: []
	W1209 04:34:44.607054 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:34:44.607059 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:34:44.607119 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:34:44.632031 1193189 cri.go:89] found id: ""
	I1209 04:34:44.632045 1193189 logs.go:282] 0 containers: []
	W1209 04:34:44.632052 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:34:44.632057 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:34:44.632116 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:34:44.656224 1193189 cri.go:89] found id: ""
	I1209 04:34:44.656237 1193189 logs.go:282] 0 containers: []
	W1209 04:34:44.656244 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:34:44.656249 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:34:44.656308 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:34:44.680302 1193189 cri.go:89] found id: ""
	I1209 04:34:44.680317 1193189 logs.go:282] 0 containers: []
	W1209 04:34:44.680323 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:34:44.680329 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:34:44.680389 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:34:44.705286 1193189 cri.go:89] found id: ""
	I1209 04:34:44.705301 1193189 logs.go:282] 0 containers: []
	W1209 04:34:44.705308 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:34:44.705319 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:34:44.705380 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:34:44.729365 1193189 cri.go:89] found id: ""
	I1209 04:34:44.729378 1193189 logs.go:282] 0 containers: []
	W1209 04:34:44.729385 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:34:44.729391 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:34:44.729452 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:34:44.753588 1193189 cri.go:89] found id: ""
	I1209 04:34:44.753601 1193189 logs.go:282] 0 containers: []
	W1209 04:34:44.753608 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:34:44.753616 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:34:44.753626 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:34:44.809786 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:34:44.809806 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:34:44.827005 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:34:44.827023 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:34:44.888308 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:34:44.880071   11658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:44.880850   11658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:44.882536   11658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:44.882961   11658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:44.884478   11658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:34:44.888318 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:34:44.888329 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:34:44.955975 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:34:44.955994 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:34:47.492101 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:34:47.502461 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:34:47.502521 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:34:47.527075 1193189 cri.go:89] found id: ""
	I1209 04:34:47.527089 1193189 logs.go:282] 0 containers: []
	W1209 04:34:47.527095 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:34:47.527109 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:34:47.527168 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:34:47.552346 1193189 cri.go:89] found id: ""
	I1209 04:34:47.552361 1193189 logs.go:282] 0 containers: []
	W1209 04:34:47.552368 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:34:47.552372 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:34:47.552439 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:34:47.577991 1193189 cri.go:89] found id: ""
	I1209 04:34:47.578005 1193189 logs.go:282] 0 containers: []
	W1209 04:34:47.578011 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:34:47.578017 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:34:47.578077 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:34:47.601711 1193189 cri.go:89] found id: ""
	I1209 04:34:47.601726 1193189 logs.go:282] 0 containers: []
	W1209 04:34:47.601733 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:34:47.601738 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:34:47.601799 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:34:47.626261 1193189 cri.go:89] found id: ""
	I1209 04:34:47.626274 1193189 logs.go:282] 0 containers: []
	W1209 04:34:47.626281 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:34:47.626287 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:34:47.626346 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:34:47.650195 1193189 cri.go:89] found id: ""
	I1209 04:34:47.650209 1193189 logs.go:282] 0 containers: []
	W1209 04:34:47.650215 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:34:47.650222 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:34:47.650289 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:34:47.674818 1193189 cri.go:89] found id: ""
	I1209 04:34:47.674844 1193189 logs.go:282] 0 containers: []
	W1209 04:34:47.674851 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:34:47.674858 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:34:47.674868 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:34:47.730669 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:34:47.730689 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:34:47.747530 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:34:47.747553 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:34:47.809873 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:34:47.800913   11766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:47.801626   11766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:47.803387   11766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:47.804067   11766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:47.805583   11766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:34:47.809893 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:34:47.809905 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:34:47.871413 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:34:47.871433 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:34:50.398661 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:34:50.408687 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:34:50.408759 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:34:50.432488 1193189 cri.go:89] found id: ""
	I1209 04:34:50.432507 1193189 logs.go:282] 0 containers: []
	W1209 04:34:50.432514 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:34:50.432520 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:34:50.432581 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:34:50.456531 1193189 cri.go:89] found id: ""
	I1209 04:34:50.456545 1193189 logs.go:282] 0 containers: []
	W1209 04:34:50.456552 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:34:50.456557 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:34:50.456617 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:34:50.484856 1193189 cri.go:89] found id: ""
	I1209 04:34:50.484871 1193189 logs.go:282] 0 containers: []
	W1209 04:34:50.484878 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:34:50.484884 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:34:50.484946 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:34:50.510277 1193189 cri.go:89] found id: ""
	I1209 04:34:50.510291 1193189 logs.go:282] 0 containers: []
	W1209 04:34:50.510297 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:34:50.510302 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:34:50.510361 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:34:50.533718 1193189 cri.go:89] found id: ""
	I1209 04:34:50.533744 1193189 logs.go:282] 0 containers: []
	W1209 04:34:50.533751 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:34:50.533756 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:34:50.533823 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:34:50.556925 1193189 cri.go:89] found id: ""
	I1209 04:34:50.556939 1193189 logs.go:282] 0 containers: []
	W1209 04:34:50.556945 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:34:50.556951 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:34:50.557010 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:34:50.581553 1193189 cri.go:89] found id: ""
	I1209 04:34:50.581567 1193189 logs.go:282] 0 containers: []
	W1209 04:34:50.581574 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:34:50.581582 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:34:50.581592 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:34:50.640077 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:34:50.640096 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:34:50.657419 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:34:50.657435 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:34:50.717755 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:34:50.710080   11873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:50.710723   11873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:50.711869   11873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:50.712446   11873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:50.713899   11873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:34:50.717765 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:34:50.717775 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:34:50.784823 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:34:50.784842 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
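Every describe-nodes attempt fails the same way: nothing is listening on the configured apiserver port, which is consistent with the empty kube-apiserver container list. A hypothetical one-liner to confirm that symptom from the node (the /healthz probe and curl flags are assumptions, not taken from this log):

    #!/bin/bash
    # Probe the apiserver port seen in the stderr above; with no process
    # bound to :8441 this fails with "connection refused".
    curl -sk --max-time 5 https://localhost:8441/healthz \
      && echo "apiserver responding on :8441" \
      || echo "nothing serving on :8441 -- kube-apiserver never started"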
	I1209 04:34:53.324166 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:34:53.333904 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:34:53.333963 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:34:53.357773 1193189 cri.go:89] found id: ""
	I1209 04:34:53.357787 1193189 logs.go:282] 0 containers: []
	W1209 04:34:53.357794 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:34:53.357799 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:34:53.357869 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:34:53.381476 1193189 cri.go:89] found id: ""
	I1209 04:34:53.381490 1193189 logs.go:282] 0 containers: []
	W1209 04:34:53.381498 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:34:53.381504 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:34:53.381563 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:34:53.404639 1193189 cri.go:89] found id: ""
	I1209 04:34:53.404653 1193189 logs.go:282] 0 containers: []
	W1209 04:34:53.404671 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:34:53.404677 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:34:53.404737 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:34:53.428572 1193189 cri.go:89] found id: ""
	I1209 04:34:53.428586 1193189 logs.go:282] 0 containers: []
	W1209 04:34:53.428593 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:34:53.428598 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:34:53.428656 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:34:53.453240 1193189 cri.go:89] found id: ""
	I1209 04:34:53.453254 1193189 logs.go:282] 0 containers: []
	W1209 04:34:53.453261 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:34:53.453266 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:34:53.453325 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:34:53.478715 1193189 cri.go:89] found id: ""
	I1209 04:34:53.478728 1193189 logs.go:282] 0 containers: []
	W1209 04:34:53.478735 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:34:53.478740 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:34:53.478798 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:34:53.503483 1193189 cri.go:89] found id: ""
	I1209 04:34:53.503497 1193189 logs.go:282] 0 containers: []
	W1209 04:34:53.503503 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:34:53.503511 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:34:53.503522 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:34:53.569898 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:34:53.560949   11970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:53.561857   11970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:53.563361   11970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:53.563947   11970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:53.565706   11970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:34:53.569907 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:34:53.569918 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:34:53.631345 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:34:53.631366 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:34:53.657935 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:34:53.657951 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:34:53.717129 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:34:53.717148 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:34:56.235149 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:34:56.245451 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:34:56.245512 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:34:56.273858 1193189 cri.go:89] found id: ""
	I1209 04:34:56.273872 1193189 logs.go:282] 0 containers: []
	W1209 04:34:56.273879 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:34:56.273884 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:34:56.273946 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:34:56.299990 1193189 cri.go:89] found id: ""
	I1209 04:34:56.300004 1193189 logs.go:282] 0 containers: []
	W1209 04:34:56.300036 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:34:56.300042 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:34:56.300109 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:34:56.325952 1193189 cri.go:89] found id: ""
	I1209 04:34:56.325965 1193189 logs.go:282] 0 containers: []
	W1209 04:34:56.325972 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:34:56.325977 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:34:56.326044 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:34:56.349999 1193189 cri.go:89] found id: ""
	I1209 04:34:56.350013 1193189 logs.go:282] 0 containers: []
	W1209 04:34:56.350020 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:34:56.350025 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:34:56.350088 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:34:56.376083 1193189 cri.go:89] found id: ""
	I1209 04:34:56.376097 1193189 logs.go:282] 0 containers: []
	W1209 04:34:56.376104 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:34:56.376109 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:34:56.376177 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:34:56.400259 1193189 cri.go:89] found id: ""
	I1209 04:34:56.400273 1193189 logs.go:282] 0 containers: []
	W1209 04:34:56.400280 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:34:56.400293 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:34:56.400352 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:34:56.424757 1193189 cri.go:89] found id: ""
	I1209 04:34:56.424777 1193189 logs.go:282] 0 containers: []
	W1209 04:34:56.424784 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:34:56.424792 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:34:56.424802 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:34:56.453832 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:34:56.453849 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:34:56.512444 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:34:56.512463 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:34:56.531303 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:34:56.531322 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:34:56.595582 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:34:56.587456   12094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:56.588255   12094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:56.589902   12094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:56.590193   12094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:56.591722   12094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:34:56.595592 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:34:56.595602 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:34:59.163281 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:34:59.173117 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:34:59.173176 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:34:59.206232 1193189 cri.go:89] found id: ""
	I1209 04:34:59.206246 1193189 logs.go:282] 0 containers: []
	W1209 04:34:59.206253 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:34:59.206257 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:34:59.206321 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:34:59.239889 1193189 cri.go:89] found id: ""
	I1209 04:34:59.239903 1193189 logs.go:282] 0 containers: []
	W1209 04:34:59.239910 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:34:59.239915 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:34:59.239977 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:34:59.268932 1193189 cri.go:89] found id: ""
	I1209 04:34:59.268946 1193189 logs.go:282] 0 containers: []
	W1209 04:34:59.268953 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:34:59.268958 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:34:59.269019 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:34:59.293191 1193189 cri.go:89] found id: ""
	I1209 04:34:59.293205 1193189 logs.go:282] 0 containers: []
	W1209 04:34:59.293211 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:34:59.293217 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:34:59.293279 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:34:59.317923 1193189 cri.go:89] found id: ""
	I1209 04:34:59.317936 1193189 logs.go:282] 0 containers: []
	W1209 04:34:59.317943 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:34:59.317948 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:34:59.318009 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:34:59.342336 1193189 cri.go:89] found id: ""
	I1209 04:34:59.342350 1193189 logs.go:282] 0 containers: []
	W1209 04:34:59.342356 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:34:59.342361 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:34:59.342419 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:34:59.366502 1193189 cri.go:89] found id: ""
	I1209 04:34:59.366517 1193189 logs.go:282] 0 containers: []
	W1209 04:34:59.366524 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:34:59.366532 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:34:59.366542 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:34:59.422133 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:34:59.422153 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:34:59.439160 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:34:59.439187 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:34:59.506261 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:34:59.497371   12189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:59.498039   12189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:59.499661   12189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:59.500189   12189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:59.501847   12189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:34:59.497371   12189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:59.498039   12189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:59.499661   12189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:59.500189   12189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:34:59.501847   12189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
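	[annotation] The block above is one iteration of minikube's per-component diagnostics, which the log repeats every ~3 seconds. A minimal sketch for reproducing the same checks by hand, assuming shell access to the node (e.g. via `minikube ssh`); every command below is one quoted in the log itself:

	    # Query each expected control-plane container; empty output means
	    # crictl found no container with that name (the failure seen above).
	    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	             kube-controller-manager kindnet; do
	      echo "== $c =="
	      sudo crictl ps -a --quiet --name="$c"
	    done
	    # The same log sources minikube gathers afterwards:
	    sudo journalctl -u kubelet -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo journalctl -u containerd -n 400
	    sudo crictl ps -a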
	I1209 04:34:59.506271 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:34:59.506282 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:34:59.575415 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:34:59.575436 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:35:02.103491 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:35:02.113633 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:35:02.113694 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:35:02.144619 1193189 cri.go:89] found id: ""
	I1209 04:35:02.144633 1193189 logs.go:282] 0 containers: []
	W1209 04:35:02.144640 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:35:02.144646 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:35:02.144705 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:35:02.170344 1193189 cri.go:89] found id: ""
	I1209 04:35:02.170361 1193189 logs.go:282] 0 containers: []
	W1209 04:35:02.170368 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:35:02.170373 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:35:02.170433 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:35:02.197667 1193189 cri.go:89] found id: ""
	I1209 04:35:02.197691 1193189 logs.go:282] 0 containers: []
	W1209 04:35:02.197699 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:35:02.197704 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:35:02.197776 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:35:02.234579 1193189 cri.go:89] found id: ""
	I1209 04:35:02.234593 1193189 logs.go:282] 0 containers: []
	W1209 04:35:02.234600 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:35:02.234605 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:35:02.234676 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:35:02.261734 1193189 cri.go:89] found id: ""
	I1209 04:35:02.261750 1193189 logs.go:282] 0 containers: []
	W1209 04:35:02.261757 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:35:02.261763 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:35:02.261840 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:35:02.287117 1193189 cri.go:89] found id: ""
	I1209 04:35:02.287132 1193189 logs.go:282] 0 containers: []
	W1209 04:35:02.287149 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:35:02.287155 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:35:02.287215 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:35:02.316821 1193189 cri.go:89] found id: ""
	I1209 04:35:02.316841 1193189 logs.go:282] 0 containers: []
	W1209 04:35:02.316887 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:35:02.316894 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:35:02.316908 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:35:02.374344 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:35:02.374364 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:35:02.391657 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:35:02.391675 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:35:02.456609 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:35:02.448842   12295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:02.449370   12295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:02.450865   12295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:02.451343   12295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:02.452897   12295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:35:02.448842   12295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:02.449370   12295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:02.450865   12295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:02.451343   12295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:02.452897   12295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:35:02.456619 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:35:02.456630 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:35:02.522522 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:35:02.522544 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:35:05.052204 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:35:05.062711 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:35:05.062783 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:35:05.088683 1193189 cri.go:89] found id: ""
	I1209 04:35:05.088699 1193189 logs.go:282] 0 containers: []
	W1209 04:35:05.088708 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:35:05.088714 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:35:05.088786 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:35:05.114558 1193189 cri.go:89] found id: ""
	I1209 04:35:05.114573 1193189 logs.go:282] 0 containers: []
	W1209 04:35:05.114580 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:35:05.114585 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:35:05.114647 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:35:05.139679 1193189 cri.go:89] found id: ""
	I1209 04:35:05.139694 1193189 logs.go:282] 0 containers: []
	W1209 04:35:05.139701 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:35:05.139713 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:35:05.139785 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:35:05.165102 1193189 cri.go:89] found id: ""
	I1209 04:35:05.165116 1193189 logs.go:282] 0 containers: []
	W1209 04:35:05.165123 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:35:05.165129 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:35:05.165200 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:35:05.193330 1193189 cri.go:89] found id: ""
	I1209 04:35:05.193354 1193189 logs.go:282] 0 containers: []
	W1209 04:35:05.193361 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:35:05.193366 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:35:05.193434 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:35:05.225572 1193189 cri.go:89] found id: ""
	I1209 04:35:05.225602 1193189 logs.go:282] 0 containers: []
	W1209 04:35:05.225610 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:35:05.225615 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:35:05.225684 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:35:05.253111 1193189 cri.go:89] found id: ""
	I1209 04:35:05.253125 1193189 logs.go:282] 0 containers: []
	W1209 04:35:05.253134 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:35:05.253142 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:35:05.253151 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:35:05.311870 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:35:05.311891 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:35:05.329165 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:35:05.329181 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:35:05.403755 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:35:05.395160   12402 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:05.395840   12402 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:05.397743   12402 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:05.398247   12402 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:05.399758   12402 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:35:05.395160   12402 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:05.395840   12402 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:05.397743   12402 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:05.398247   12402 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:05.399758   12402 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:35:05.403765 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:35:05.403778 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:35:05.466140 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:35:05.466163 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:35:08.001482 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:35:08.012555 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:35:08.012621 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:35:08.038489 1193189 cri.go:89] found id: ""
	I1209 04:35:08.038502 1193189 logs.go:282] 0 containers: []
	W1209 04:35:08.038510 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:35:08.038515 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:35:08.038577 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:35:08.063791 1193189 cri.go:89] found id: ""
	I1209 04:35:08.063806 1193189 logs.go:282] 0 containers: []
	W1209 04:35:08.063813 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:35:08.063819 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:35:08.063883 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:35:08.088918 1193189 cri.go:89] found id: ""
	I1209 04:35:08.088933 1193189 logs.go:282] 0 containers: []
	W1209 04:35:08.088940 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:35:08.088945 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:35:08.089006 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:35:08.113601 1193189 cri.go:89] found id: ""
	I1209 04:35:08.113614 1193189 logs.go:282] 0 containers: []
	W1209 04:35:08.113623 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:35:08.113628 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:35:08.113684 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:35:08.136899 1193189 cri.go:89] found id: ""
	I1209 04:35:08.136912 1193189 logs.go:282] 0 containers: []
	W1209 04:35:08.136924 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:35:08.136929 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:35:08.136988 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:35:08.160001 1193189 cri.go:89] found id: ""
	I1209 04:35:08.160050 1193189 logs.go:282] 0 containers: []
	W1209 04:35:08.160057 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:35:08.160062 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:35:08.160119 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:35:08.193362 1193189 cri.go:89] found id: ""
	I1209 04:35:08.193375 1193189 logs.go:282] 0 containers: []
	W1209 04:35:08.193382 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:35:08.193390 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:35:08.193400 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:35:08.255924 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:35:08.255942 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:35:08.274860 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:35:08.274876 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:35:08.341852 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:35:08.333782   12503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:08.334529   12503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:08.336277   12503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:08.336676   12503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:08.338082   12503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:35:08.333782   12503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:08.334529   12503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:08.336277   12503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:08.336676   12503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:08.338082   12503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:35:08.341863 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:35:08.341875 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:35:08.402199 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:35:08.402217 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:35:10.929478 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:35:10.939723 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:35:10.939784 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:35:10.964690 1193189 cri.go:89] found id: ""
	I1209 04:35:10.964704 1193189 logs.go:282] 0 containers: []
	W1209 04:35:10.964711 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:35:10.964716 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:35:10.964796 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:35:10.993239 1193189 cri.go:89] found id: ""
	I1209 04:35:10.993253 1193189 logs.go:282] 0 containers: []
	W1209 04:35:10.993260 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:35:10.993265 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:35:10.993323 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:35:11.019779 1193189 cri.go:89] found id: ""
	I1209 04:35:11.019793 1193189 logs.go:282] 0 containers: []
	W1209 04:35:11.019800 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:35:11.019805 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:35:11.019867 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:35:11.044082 1193189 cri.go:89] found id: ""
	I1209 04:35:11.044095 1193189 logs.go:282] 0 containers: []
	W1209 04:35:11.044104 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:35:11.044109 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:35:11.044170 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:35:11.067732 1193189 cri.go:89] found id: ""
	I1209 04:35:11.067746 1193189 logs.go:282] 0 containers: []
	W1209 04:35:11.067753 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:35:11.067758 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:35:11.067827 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:35:11.094131 1193189 cri.go:89] found id: ""
	I1209 04:35:11.094145 1193189 logs.go:282] 0 containers: []
	W1209 04:35:11.094152 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:35:11.094157 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:35:11.094217 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:35:11.120246 1193189 cri.go:89] found id: ""
	I1209 04:35:11.120261 1193189 logs.go:282] 0 containers: []
	W1209 04:35:11.120269 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:35:11.120277 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:35:11.120288 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:35:11.188699 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:35:11.188719 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:35:11.220249 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:35:11.220272 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:35:11.281813 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:35:11.281834 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:35:11.299608 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:35:11.299624 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:35:11.364974 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:35:11.356357   12622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:11.357044   12622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:11.358808   12622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:11.359431   12622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:11.361160   12622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:35:11.356357   12622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:11.357044   12622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:11.358808   12622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:11.359431   12622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:11.361160   12622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
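	[annotation] Every "describe nodes" attempt fails identically: kubectl on the node cannot reach the apiserver on localhost:8441 (connection refused), consistent with crictl finding no kube-apiserver container. A hedged sanity check under the same shell-access assumption; the pgrep pattern is the one minikube polls with above, while the curl probe is an addition not present in the log:

	    # Is a kube-apiserver process running at all? (non-zero exit = no;
	    # the pattern is quoted here to stop the shell from globbing it)
	    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	    # Hypothetical extra probe: does anything answer on the apiserver port?
	    curl -ksS https://localhost:8441/healthz || echo 'apiserver unreachable'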
	I1209 04:35:13.865252 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:35:13.875906 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:35:13.875966 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:35:13.901925 1193189 cri.go:89] found id: ""
	I1209 04:35:13.901941 1193189 logs.go:282] 0 containers: []
	W1209 04:35:13.901947 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:35:13.901953 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:35:13.902023 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:35:13.929808 1193189 cri.go:89] found id: ""
	I1209 04:35:13.929823 1193189 logs.go:282] 0 containers: []
	W1209 04:35:13.929830 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:35:13.929835 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:35:13.929896 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:35:13.955030 1193189 cri.go:89] found id: ""
	I1209 04:35:13.955045 1193189 logs.go:282] 0 containers: []
	W1209 04:35:13.955051 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:35:13.955056 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:35:13.955114 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:35:13.979829 1193189 cri.go:89] found id: ""
	I1209 04:35:13.979843 1193189 logs.go:282] 0 containers: []
	W1209 04:35:13.979849 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:35:13.979854 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:35:13.979918 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:35:14.007254 1193189 cri.go:89] found id: ""
	I1209 04:35:14.007269 1193189 logs.go:282] 0 containers: []
	W1209 04:35:14.007275 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:35:14.007281 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:35:14.007345 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:35:14.032915 1193189 cri.go:89] found id: ""
	I1209 04:35:14.032929 1193189 logs.go:282] 0 containers: []
	W1209 04:35:14.032936 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:35:14.032941 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:35:14.032999 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:35:14.061801 1193189 cri.go:89] found id: ""
	I1209 04:35:14.061826 1193189 logs.go:282] 0 containers: []
	W1209 04:35:14.061834 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:35:14.061842 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:35:14.061853 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:35:14.125545 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:35:14.117510   12701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:14.118249   12701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:14.119815   12701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:14.120178   12701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:14.121732   12701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:35:14.117510   12701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:14.118249   12701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:14.119815   12701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:14.120178   12701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:14.121732   12701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:35:14.125555 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:35:14.125569 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:35:14.192586 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:35:14.192605 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:35:14.223400 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:35:14.223417 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:35:14.284525 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:35:14.284545 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:35:16.802913 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:35:16.812669 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:35:16.812730 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:35:16.836304 1193189 cri.go:89] found id: ""
	I1209 04:35:16.836318 1193189 logs.go:282] 0 containers: []
	W1209 04:35:16.836324 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:35:16.836329 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:35:16.836386 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:35:16.861382 1193189 cri.go:89] found id: ""
	I1209 04:35:16.861396 1193189 logs.go:282] 0 containers: []
	W1209 04:35:16.861403 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:35:16.861407 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:35:16.861467 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:35:16.884827 1193189 cri.go:89] found id: ""
	I1209 04:35:16.884841 1193189 logs.go:282] 0 containers: []
	W1209 04:35:16.884848 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:35:16.884853 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:35:16.884913 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:35:16.907933 1193189 cri.go:89] found id: ""
	I1209 04:35:16.907946 1193189 logs.go:282] 0 containers: []
	W1209 04:35:16.907953 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:35:16.907959 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:35:16.908028 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:35:16.933329 1193189 cri.go:89] found id: ""
	I1209 04:35:16.933344 1193189 logs.go:282] 0 containers: []
	W1209 04:35:16.933350 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:35:16.933355 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:35:16.933418 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:35:16.957725 1193189 cri.go:89] found id: ""
	I1209 04:35:16.957739 1193189 logs.go:282] 0 containers: []
	W1209 04:35:16.957745 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:35:16.957751 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:35:16.957807 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:35:16.981209 1193189 cri.go:89] found id: ""
	I1209 04:35:16.981223 1193189 logs.go:282] 0 containers: []
	W1209 04:35:16.981231 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:35:16.981240 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:35:16.981249 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:35:17.039472 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:35:17.039491 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:35:17.056497 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:35:17.056514 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:35:17.119231 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:35:17.111277   12811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:17.111948   12811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:17.113585   12811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:17.114023   12811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:17.115511   12811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:35:17.111277   12811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:17.111948   12811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:17.113585   12811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:17.114023   12811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:17.115511   12811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:35:17.119240 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:35:17.119251 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:35:17.181494 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:35:17.181513 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:35:19.709396 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:35:19.719323 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:35:19.719388 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:35:19.743245 1193189 cri.go:89] found id: ""
	I1209 04:35:19.743259 1193189 logs.go:282] 0 containers: []
	W1209 04:35:19.743266 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:35:19.743271 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:35:19.743328 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:35:19.767566 1193189 cri.go:89] found id: ""
	I1209 04:35:19.767581 1193189 logs.go:282] 0 containers: []
	W1209 04:35:19.767587 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:35:19.767592 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:35:19.767649 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:35:19.797227 1193189 cri.go:89] found id: ""
	I1209 04:35:19.797241 1193189 logs.go:282] 0 containers: []
	W1209 04:35:19.797248 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:35:19.797253 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:35:19.797311 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:35:19.820451 1193189 cri.go:89] found id: ""
	I1209 04:35:19.820465 1193189 logs.go:282] 0 containers: []
	W1209 04:35:19.820471 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:35:19.820477 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:35:19.820534 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:35:19.844577 1193189 cri.go:89] found id: ""
	I1209 04:35:19.844591 1193189 logs.go:282] 0 containers: []
	W1209 04:35:19.844597 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:35:19.844603 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:35:19.844661 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:35:19.868336 1193189 cri.go:89] found id: ""
	I1209 04:35:19.868350 1193189 logs.go:282] 0 containers: []
	W1209 04:35:19.868356 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:35:19.868362 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:35:19.868430 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:35:19.893016 1193189 cri.go:89] found id: ""
	I1209 04:35:19.893030 1193189 logs.go:282] 0 containers: []
	W1209 04:35:19.893037 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:35:19.893045 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:35:19.893055 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:35:19.947540 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:35:19.947561 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:35:19.964623 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:35:19.964640 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:35:20.041799 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:35:20.033487   12917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:20.034256   12917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:20.035804   12917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:20.036289   12917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:20.037849   12917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:35:20.033487   12917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:20.034256   12917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:20.035804   12917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:20.036289   12917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:20.037849   12917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:35:20.041809 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:35:20.041829 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:35:20.106338 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:35:20.106361 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:35:22.634358 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:35:22.644145 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:35:22.644208 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:35:22.670154 1193189 cri.go:89] found id: ""
	I1209 04:35:22.670171 1193189 logs.go:282] 0 containers: []
	W1209 04:35:22.670178 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:35:22.670189 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:35:22.670255 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:35:22.704705 1193189 cri.go:89] found id: ""
	I1209 04:35:22.704724 1193189 logs.go:282] 0 containers: []
	W1209 04:35:22.704731 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:35:22.704742 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:35:22.704815 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:35:22.729994 1193189 cri.go:89] found id: ""
	I1209 04:35:22.730010 1193189 logs.go:282] 0 containers: []
	W1209 04:35:22.730016 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:35:22.730021 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:35:22.730085 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:35:22.755372 1193189 cri.go:89] found id: ""
	I1209 04:35:22.755386 1193189 logs.go:282] 0 containers: []
	W1209 04:35:22.755393 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:35:22.755399 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:35:22.755468 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:35:22.781698 1193189 cri.go:89] found id: ""
	I1209 04:35:22.781712 1193189 logs.go:282] 0 containers: []
	W1209 04:35:22.781718 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:35:22.781724 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:35:22.781783 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:35:22.810395 1193189 cri.go:89] found id: ""
	I1209 04:35:22.810409 1193189 logs.go:282] 0 containers: []
	W1209 04:35:22.810417 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:35:22.810422 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:35:22.810491 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:35:22.834867 1193189 cri.go:89] found id: ""
	I1209 04:35:22.834881 1193189 logs.go:282] 0 containers: []
	W1209 04:35:22.834888 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:35:22.834896 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:35:22.834914 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:35:22.895493 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:35:22.895514 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:35:22.923338 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:35:22.923355 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:35:22.981048 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:35:22.981069 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:35:22.998202 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:35:22.998221 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:35:23.060221 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:35:23.052398   13038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:23.053078   13038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:23.054527   13038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:23.054989   13038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:23.056396   13038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:35:25.561920 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:35:25.571773 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:35:25.571837 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:35:25.595193 1193189 cri.go:89] found id: ""
	I1209 04:35:25.595207 1193189 logs.go:282] 0 containers: []
	W1209 04:35:25.595215 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:35:25.595220 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:35:25.595285 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:35:25.619637 1193189 cri.go:89] found id: ""
	I1209 04:35:25.619651 1193189 logs.go:282] 0 containers: []
	W1209 04:35:25.619658 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:35:25.619664 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:35:25.619726 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:35:25.644298 1193189 cri.go:89] found id: ""
	I1209 04:35:25.644313 1193189 logs.go:282] 0 containers: []
	W1209 04:35:25.644319 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:35:25.644325 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:35:25.644384 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:35:25.668990 1193189 cri.go:89] found id: ""
	I1209 04:35:25.669003 1193189 logs.go:282] 0 containers: []
	W1209 04:35:25.669011 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:35:25.669016 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:35:25.669078 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:35:25.693184 1193189 cri.go:89] found id: ""
	I1209 04:35:25.693199 1193189 logs.go:282] 0 containers: []
	W1209 04:35:25.693206 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:35:25.693211 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:35:25.693269 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:35:25.718924 1193189 cri.go:89] found id: ""
	I1209 04:35:25.718939 1193189 logs.go:282] 0 containers: []
	W1209 04:35:25.718946 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:35:25.718951 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:35:25.719014 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:35:25.744270 1193189 cri.go:89] found id: ""
	I1209 04:35:25.744287 1193189 logs.go:282] 0 containers: []
	W1209 04:35:25.744294 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:35:25.744303 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:35:25.744313 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:35:25.775297 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:35:25.775312 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:35:25.830399 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:35:25.830417 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:35:25.846995 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:35:25.847011 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:35:25.907973 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:35:25.899536   13140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:25.899964   13140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:25.901112   13140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:25.902600   13140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:25.903121   13140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:35:25.908000 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:35:25.908009 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
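Each probe cycle repeats the same per-component container check, one crictl invocation per control-plane component. A compact equivalent of one cycle, using the commands verbatim from the log and assuming crictl is on the node's PATH:

	# Empty output from `crictl ps -a --quiet` is what yields the
	# `found id: ""` and `0 containers` lines in every cycle.
	for name in kube-apiserver etcd coredns kube-scheduler \
	            kube-proxy kube-controller-manager kindnet; do
	  ids=$(sudo crictl ps -a --quiet --name="$name")
	  [ -z "$ids" ] && echo "no container found matching \"$name\""
	done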
	I1209 04:35:28.475800 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:35:28.486363 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:35:28.486434 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:35:28.511630 1193189 cri.go:89] found id: ""
	I1209 04:35:28.511649 1193189 logs.go:282] 0 containers: []
	W1209 04:35:28.511657 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:35:28.511662 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:35:28.511734 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:35:28.539616 1193189 cri.go:89] found id: ""
	I1209 04:35:28.539631 1193189 logs.go:282] 0 containers: []
	W1209 04:35:28.539638 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:35:28.539643 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:35:28.539704 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:35:28.563311 1193189 cri.go:89] found id: ""
	I1209 04:35:28.563325 1193189 logs.go:282] 0 containers: []
	W1209 04:35:28.563333 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:35:28.563338 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:35:28.563399 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:35:28.591490 1193189 cri.go:89] found id: ""
	I1209 04:35:28.591504 1193189 logs.go:282] 0 containers: []
	W1209 04:35:28.591511 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:35:28.591516 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:35:28.591574 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:35:28.614638 1193189 cri.go:89] found id: ""
	I1209 04:35:28.614653 1193189 logs.go:282] 0 containers: []
	W1209 04:35:28.614660 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:35:28.614665 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:35:28.614729 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:35:28.638698 1193189 cri.go:89] found id: ""
	I1209 04:35:28.638712 1193189 logs.go:282] 0 containers: []
	W1209 04:35:28.638720 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:35:28.638727 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:35:28.638788 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:35:28.665819 1193189 cri.go:89] found id: ""
	I1209 04:35:28.665837 1193189 logs.go:282] 0 containers: []
	W1209 04:35:28.665843 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:35:28.665851 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:35:28.665861 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:35:28.693372 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:35:28.693387 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:35:28.750183 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:35:28.750203 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:35:28.768641 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:35:28.768659 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:35:28.832332 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:35:28.823785   13239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:28.824261   13239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:28.826084   13239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:28.826770   13239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:28.828406   13239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:35:28.832342 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:35:28.832352 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
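The head of each cycle is a process-level check, sudo pgrep -xnf kube-apiserver.*minikube.*: -f matches the pattern against the full command line, -x requires an exact match, and -n keeps only the newest matching process. A sketch of how that result gates the cycle, assuming the same pattern:

	if sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; then
	  echo 'kube-apiserver process found'
	else
	  # This is the branch taken throughout this log: no process, so the
	  # cycle falls through to the CRI listings and log gathering.
	  echo 'kube-apiserver process missing' >&2
	fi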
	I1209 04:35:31.394797 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:35:31.404399 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:35:31.404459 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:35:31.427865 1193189 cri.go:89] found id: ""
	I1209 04:35:31.427879 1193189 logs.go:282] 0 containers: []
	W1209 04:35:31.427886 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:35:31.427893 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:35:31.427957 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:35:31.465245 1193189 cri.go:89] found id: ""
	I1209 04:35:31.465259 1193189 logs.go:282] 0 containers: []
	W1209 04:35:31.465266 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:35:31.465271 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:35:31.465333 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:35:31.499189 1193189 cri.go:89] found id: ""
	I1209 04:35:31.499202 1193189 logs.go:282] 0 containers: []
	W1209 04:35:31.499209 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:35:31.499215 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:35:31.499272 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:35:31.525936 1193189 cri.go:89] found id: ""
	I1209 04:35:31.525950 1193189 logs.go:282] 0 containers: []
	W1209 04:35:31.525958 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:35:31.525963 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:35:31.526023 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:35:31.550933 1193189 cri.go:89] found id: ""
	I1209 04:35:31.550948 1193189 logs.go:282] 0 containers: []
	W1209 04:35:31.550955 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:35:31.550960 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:35:31.551019 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:35:31.574667 1193189 cri.go:89] found id: ""
	I1209 04:35:31.574681 1193189 logs.go:282] 0 containers: []
	W1209 04:35:31.574689 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:35:31.574694 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:35:31.574754 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:35:31.599346 1193189 cri.go:89] found id: ""
	I1209 04:35:31.599360 1193189 logs.go:282] 0 containers: []
	W1209 04:35:31.599367 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:35:31.599374 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:35:31.599384 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:35:31.625893 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:35:31.625912 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:35:31.681164 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:35:31.681181 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:35:31.697997 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:35:31.698014 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:35:31.765231 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:35:31.757080   13343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:31.757463   13343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:31.759010   13343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:31.759311   13343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:31.760784   13343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:35:31.765242 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:35:31.765253 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:35:34.325149 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:35:34.334839 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:35:34.334897 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:35:34.359238 1193189 cri.go:89] found id: ""
	I1209 04:35:34.359251 1193189 logs.go:282] 0 containers: []
	W1209 04:35:34.359258 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:35:34.359263 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:35:34.359324 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:35:34.383217 1193189 cri.go:89] found id: ""
	I1209 04:35:34.383231 1193189 logs.go:282] 0 containers: []
	W1209 04:35:34.383237 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:35:34.383242 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:35:34.383301 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:35:34.407421 1193189 cri.go:89] found id: ""
	I1209 04:35:34.407435 1193189 logs.go:282] 0 containers: []
	W1209 04:35:34.407442 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:35:34.407454 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:35:34.407513 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:35:34.440852 1193189 cri.go:89] found id: ""
	I1209 04:35:34.440865 1193189 logs.go:282] 0 containers: []
	W1209 04:35:34.440872 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:35:34.440878 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:35:34.440938 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:35:34.474370 1193189 cri.go:89] found id: ""
	I1209 04:35:34.474382 1193189 logs.go:282] 0 containers: []
	W1209 04:35:34.474389 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:35:34.474400 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:35:34.474459 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:35:34.503074 1193189 cri.go:89] found id: ""
	I1209 04:35:34.503088 1193189 logs.go:282] 0 containers: []
	W1209 04:35:34.503095 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:35:34.503103 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:35:34.503160 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:35:34.533672 1193189 cri.go:89] found id: ""
	I1209 04:35:34.533686 1193189 logs.go:282] 0 containers: []
	W1209 04:35:34.533693 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:35:34.533701 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:35:34.533711 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:35:34.550119 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:35:34.550138 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:35:34.614817 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:35:34.606452   13437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:34.606849   13437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:34.608481   13437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:34.609159   13437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:34.610864   13437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:35:34.614827 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:35:34.614837 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:35:34.677461 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:35:34.677482 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:35:34.703505 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:35:34.703520 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
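The four log collectors are identical in every cycle; only their order varies (this cycle gathers kubelet last, while the next one at 04:35:37 gathers it first). The commands, verbatim from the log:

	sudo journalctl -u kubelet -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo journalctl -u containerd -n 400
	sudo `which crictl || echo crictl` ps -a || sudo docker ps -a

The last line falls back to docker ps -a when crictl is missing or fails, so the container-status collector works on both containerd and docker runtimes.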
	I1209 04:35:37.258780 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:35:37.268941 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:35:37.269002 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:35:37.292668 1193189 cri.go:89] found id: ""
	I1209 04:35:37.292682 1193189 logs.go:282] 0 containers: []
	W1209 04:35:37.292689 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:35:37.292694 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:35:37.292757 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:35:37.320157 1193189 cri.go:89] found id: ""
	I1209 04:35:37.320171 1193189 logs.go:282] 0 containers: []
	W1209 04:35:37.320177 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:35:37.320183 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:35:37.320240 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:35:37.343858 1193189 cri.go:89] found id: ""
	I1209 04:35:37.343872 1193189 logs.go:282] 0 containers: []
	W1209 04:35:37.343879 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:35:37.343884 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:35:37.343947 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:35:37.366919 1193189 cri.go:89] found id: ""
	I1209 04:35:37.366932 1193189 logs.go:282] 0 containers: []
	W1209 04:35:37.366939 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:35:37.366945 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:35:37.367003 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:35:37.391330 1193189 cri.go:89] found id: ""
	I1209 04:35:37.391344 1193189 logs.go:282] 0 containers: []
	W1209 04:35:37.391351 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:35:37.391356 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:35:37.391417 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:35:37.414885 1193189 cri.go:89] found id: ""
	I1209 04:35:37.414899 1193189 logs.go:282] 0 containers: []
	W1209 04:35:37.414906 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:35:37.414911 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:35:37.414967 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:35:37.440557 1193189 cri.go:89] found id: ""
	I1209 04:35:37.440570 1193189 logs.go:282] 0 containers: []
	W1209 04:35:37.440577 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:35:37.440585 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:35:37.440595 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:35:37.501076 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:35:37.501094 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:35:37.523552 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:35:37.523569 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:35:37.590387 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:35:37.582017   13547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:37.582700   13547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:37.584424   13547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:37.584939   13547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:37.586547   13547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:35:37.590397 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:35:37.590408 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:35:37.653090 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:35:37.653108 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:35:40.184839 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:35:40.195112 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:35:40.195177 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:35:40.221158 1193189 cri.go:89] found id: ""
	I1209 04:35:40.221173 1193189 logs.go:282] 0 containers: []
	W1209 04:35:40.221180 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:35:40.221185 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:35:40.221246 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:35:40.246395 1193189 cri.go:89] found id: ""
	I1209 04:35:40.246415 1193189 logs.go:282] 0 containers: []
	W1209 04:35:40.246422 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:35:40.246428 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:35:40.246487 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:35:40.270697 1193189 cri.go:89] found id: ""
	I1209 04:35:40.270711 1193189 logs.go:282] 0 containers: []
	W1209 04:35:40.270718 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:35:40.270723 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:35:40.270781 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:35:40.295006 1193189 cri.go:89] found id: ""
	I1209 04:35:40.295021 1193189 logs.go:282] 0 containers: []
	W1209 04:35:40.295028 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:35:40.295033 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:35:40.295093 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:35:40.319784 1193189 cri.go:89] found id: ""
	I1209 04:35:40.319797 1193189 logs.go:282] 0 containers: []
	W1209 04:35:40.319804 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:35:40.319810 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:35:40.319872 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:35:40.344094 1193189 cri.go:89] found id: ""
	I1209 04:35:40.344108 1193189 logs.go:282] 0 containers: []
	W1209 04:35:40.344115 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:35:40.344120 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:35:40.344181 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:35:40.368626 1193189 cri.go:89] found id: ""
	I1209 04:35:40.368640 1193189 logs.go:282] 0 containers: []
	W1209 04:35:40.368647 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:35:40.368654 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:35:40.368665 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:35:40.423837 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:35:40.423857 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:35:40.452134 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:35:40.452157 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:35:40.527559 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:35:40.519583   13649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:40.519986   13649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:40.521271   13649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:40.521835   13649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:40.523570   13649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:35:40.527610 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:35:40.527620 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:35:40.588474 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:35:40.588495 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:35:43.118634 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:35:43.128671 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:35:43.128738 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:35:43.152143 1193189 cri.go:89] found id: ""
	I1209 04:35:43.152158 1193189 logs.go:282] 0 containers: []
	W1209 04:35:43.152179 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:35:43.152185 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:35:43.152255 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:35:43.176188 1193189 cri.go:89] found id: ""
	I1209 04:35:43.176203 1193189 logs.go:282] 0 containers: []
	W1209 04:35:43.176210 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:35:43.176215 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:35:43.176275 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:35:43.199682 1193189 cri.go:89] found id: ""
	I1209 04:35:43.199696 1193189 logs.go:282] 0 containers: []
	W1209 04:35:43.199702 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:35:43.199707 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:35:43.199767 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:35:43.224229 1193189 cri.go:89] found id: ""
	I1209 04:35:43.224244 1193189 logs.go:282] 0 containers: []
	W1209 04:35:43.224251 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:35:43.224257 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:35:43.224318 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:35:43.249684 1193189 cri.go:89] found id: ""
	I1209 04:35:43.249698 1193189 logs.go:282] 0 containers: []
	W1209 04:35:43.249705 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:35:43.249710 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:35:43.249773 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:35:43.273701 1193189 cri.go:89] found id: ""
	I1209 04:35:43.273715 1193189 logs.go:282] 0 containers: []
	W1209 04:35:43.273724 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:35:43.273729 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:35:43.273790 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:35:43.297360 1193189 cri.go:89] found id: ""
	I1209 04:35:43.297375 1193189 logs.go:282] 0 containers: []
	W1209 04:35:43.297382 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:35:43.297389 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:35:43.297400 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:35:43.323849 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:35:43.323865 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:35:43.380806 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:35:43.380825 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:35:43.397905 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:35:43.397924 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:35:43.474648 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:35:43.464143   13758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:43.464857   13758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:43.468210   13758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:43.468799   13758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:43.470475   13758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:35:43.474658 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:35:43.474668 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:35:46.038037 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:35:46.048448 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:35:46.048513 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:35:46.073156 1193189 cri.go:89] found id: ""
	I1209 04:35:46.073170 1193189 logs.go:282] 0 containers: []
	W1209 04:35:46.073177 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:35:46.073182 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:35:46.073246 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:35:46.103227 1193189 cri.go:89] found id: ""
	I1209 04:35:46.103242 1193189 logs.go:282] 0 containers: []
	W1209 04:35:46.103249 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:35:46.103255 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:35:46.103324 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:35:46.126371 1193189 cri.go:89] found id: ""
	I1209 04:35:46.126385 1193189 logs.go:282] 0 containers: []
	W1209 04:35:46.126392 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:35:46.126397 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:35:46.126457 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:35:46.151271 1193189 cri.go:89] found id: ""
	I1209 04:35:46.151284 1193189 logs.go:282] 0 containers: []
	W1209 04:35:46.151291 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:35:46.151296 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:35:46.151354 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:35:46.175057 1193189 cri.go:89] found id: ""
	I1209 04:35:46.175071 1193189 logs.go:282] 0 containers: []
	W1209 04:35:46.175077 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:35:46.175082 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:35:46.175140 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:35:46.203063 1193189 cri.go:89] found id: ""
	I1209 04:35:46.203078 1193189 logs.go:282] 0 containers: []
	W1209 04:35:46.203085 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:35:46.203091 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:35:46.203148 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:35:46.229251 1193189 cri.go:89] found id: ""
	I1209 04:35:46.229267 1193189 logs.go:282] 0 containers: []
	W1209 04:35:46.229274 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:35:46.229281 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:35:46.229291 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:35:46.298699 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:35:46.289900   13846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:46.290515   13846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:46.292304   13846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:46.292640   13846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:46.294235   13846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:35:46.298709 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:35:46.298720 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:35:46.363949 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:35:46.363976 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:35:46.391889 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:35:46.391906 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:35:46.454456 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:35:46.454483 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
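
The lines above are one pass of minikube's API-server wait loop: pgrep looks for a kube-apiserver process, crictl is queried for each control-plane container by name, and, with nothing found, a diagnostics-gathering round runs before the next poll a few seconds later. A minimal bash sketch of the per-component check, built only from commands that appear verbatim in the Run: lines (the loop itself and the variable names are illustrative, not minikube's actual code):

    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
        ids=$(sudo crictl ps -a --quiet --name="$name")
        # empty output is what the log records as: found id: ""
        [ -z "$ids" ] && echo "No container was found matching \"$name\""
    done

Every cycle in this section reports an empty id list for all seven names, so the gathering step always follows.
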
	I1209 04:35:48.975649 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:35:48.985708 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:35:48.985766 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:35:49.011399 1193189 cri.go:89] found id: ""
	I1209 04:35:49.011413 1193189 logs.go:282] 0 containers: []
	W1209 04:35:49.011420 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:35:49.011426 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:35:49.011483 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:35:49.036873 1193189 cri.go:89] found id: ""
	I1209 04:35:49.036887 1193189 logs.go:282] 0 containers: []
	W1209 04:35:49.036894 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:35:49.036899 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:35:49.036960 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:35:49.066005 1193189 cri.go:89] found id: ""
	I1209 04:35:49.066019 1193189 logs.go:282] 0 containers: []
	W1209 04:35:49.066025 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:35:49.066031 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:35:49.066091 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:35:49.093270 1193189 cri.go:89] found id: ""
	I1209 04:35:49.093284 1193189 logs.go:282] 0 containers: []
	W1209 04:35:49.093291 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:35:49.093297 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:35:49.093357 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:35:49.116583 1193189 cri.go:89] found id: ""
	I1209 04:35:49.116597 1193189 logs.go:282] 0 containers: []
	W1209 04:35:49.116604 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:35:49.116609 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:35:49.116667 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:35:49.141295 1193189 cri.go:89] found id: ""
	I1209 04:35:49.141309 1193189 logs.go:282] 0 containers: []
	W1209 04:35:49.141316 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:35:49.141321 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:35:49.141382 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:35:49.164496 1193189 cri.go:89] found id: ""
	I1209 04:35:49.164509 1193189 logs.go:282] 0 containers: []
	W1209 04:35:49.164516 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:35:49.164524 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:35:49.164533 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:35:49.220406 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:35:49.220426 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:35:49.237143 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:35:49.237159 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:35:49.305702 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:35:49.296253   13955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:49.297596   13955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:49.298689   13955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:49.299456   13955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:49.301121   13955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:35:49.296253   13955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:49.297596   13955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:49.298689   13955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:49.299456   13955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:49.301121   13955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:35:49.305724 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:35:49.305737 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:35:49.367200 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:35:49.367219 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
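
The describe-nodes failure is a symptom rather than a cause: kubectl, reading /var/lib/minikube/kubeconfig, targets https://localhost:8441, and the dial is refused because no kube-apiserver container was ever created, so nothing listens on that port inside the node. A quick confirmation from a shell in the node, assuming ss from iproute2 is available in the node image:

    sudo ss -ltn | grep ':8441' || echo "nothing listening on 8441"

grep exiting non-zero (no listener found) matches the repeated "connect: connection refused" lines above.
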
	I1209 04:35:51.895283 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:35:51.905706 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:35:51.905765 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:35:51.929677 1193189 cri.go:89] found id: ""
	I1209 04:35:51.929691 1193189 logs.go:282] 0 containers: []
	W1209 04:35:51.929698 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:35:51.929703 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:35:51.929764 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:35:51.953232 1193189 cri.go:89] found id: ""
	I1209 04:35:51.953246 1193189 logs.go:282] 0 containers: []
	W1209 04:35:51.953252 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:35:51.953257 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:35:51.953314 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:35:51.979515 1193189 cri.go:89] found id: ""
	I1209 04:35:51.979528 1193189 logs.go:282] 0 containers: []
	W1209 04:35:51.979535 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:35:51.979540 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:35:51.979601 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:35:52.009061 1193189 cri.go:89] found id: ""
	I1209 04:35:52.009075 1193189 logs.go:282] 0 containers: []
	W1209 04:35:52.009082 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:35:52.009087 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:35:52.009154 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:35:52.036289 1193189 cri.go:89] found id: ""
	I1209 04:35:52.036309 1193189 logs.go:282] 0 containers: []
	W1209 04:35:52.036316 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:35:52.036321 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:35:52.036386 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:35:52.061853 1193189 cri.go:89] found id: ""
	I1209 04:35:52.061867 1193189 logs.go:282] 0 containers: []
	W1209 04:35:52.061874 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:35:52.061879 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:35:52.061942 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:35:52.090416 1193189 cri.go:89] found id: ""
	I1209 04:35:52.090443 1193189 logs.go:282] 0 containers: []
	W1209 04:35:52.090451 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:35:52.090459 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:35:52.090469 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:35:52.120980 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:35:52.120996 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:35:52.177079 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:35:52.177098 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:35:52.195520 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:35:52.195537 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:35:52.260151 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:35:52.251913   14072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:52.252734   14072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:52.254403   14072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:52.254982   14072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:52.256470   14072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:35:52.251913   14072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:52.252734   14072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:52.254403   14072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:52.254982   14072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:52.256470   14072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:35:52.260161 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:35:52.260172 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:35:54.821803 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:35:54.831356 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:35:54.831415 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:35:54.855283 1193189 cri.go:89] found id: ""
	I1209 04:35:54.855298 1193189 logs.go:282] 0 containers: []
	W1209 04:35:54.855304 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:35:54.855309 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:35:54.855369 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:35:54.889160 1193189 cri.go:89] found id: ""
	I1209 04:35:54.889174 1193189 logs.go:282] 0 containers: []
	W1209 04:35:54.889181 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:35:54.889186 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:35:54.889245 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:35:54.912925 1193189 cri.go:89] found id: ""
	I1209 04:35:54.912939 1193189 logs.go:282] 0 containers: []
	W1209 04:35:54.912946 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:35:54.912951 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:35:54.913019 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:35:54.937856 1193189 cri.go:89] found id: ""
	I1209 04:35:54.937869 1193189 logs.go:282] 0 containers: []
	W1209 04:35:54.937876 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:35:54.937881 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:35:54.937939 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:35:54.961607 1193189 cri.go:89] found id: ""
	I1209 04:35:54.961620 1193189 logs.go:282] 0 containers: []
	W1209 04:35:54.961626 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:35:54.961632 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:35:54.961692 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:35:54.984614 1193189 cri.go:89] found id: ""
	I1209 04:35:54.984627 1193189 logs.go:282] 0 containers: []
	W1209 04:35:54.984634 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:35:54.984639 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:35:54.984702 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:35:55.019938 1193189 cri.go:89] found id: ""
	I1209 04:35:55.019952 1193189 logs.go:282] 0 containers: []
	W1209 04:35:55.019959 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:35:55.019967 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:35:55.019977 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:35:55.076703 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:35:55.076722 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:35:55.094781 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:35:55.094801 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:35:55.164076 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:35:55.155994   14165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:55.156899   14165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:55.158415   14165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:55.158819   14165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:55.160056   14165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:35:55.155994   14165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:55.156899   14165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:55.158415   14165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:55.158819   14165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:55.160056   14165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:35:55.164088 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:35:55.164098 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:35:55.225429 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:35:55.225451 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:35:57.756131 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:35:57.766096 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:35:57.766152 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:35:57.794059 1193189 cri.go:89] found id: ""
	I1209 04:35:57.794073 1193189 logs.go:282] 0 containers: []
	W1209 04:35:57.794080 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:35:57.794085 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:35:57.794142 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:35:57.817501 1193189 cri.go:89] found id: ""
	I1209 04:35:57.817514 1193189 logs.go:282] 0 containers: []
	W1209 04:35:57.817520 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:35:57.817526 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:35:57.817582 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:35:57.841800 1193189 cri.go:89] found id: ""
	I1209 04:35:57.841814 1193189 logs.go:282] 0 containers: []
	W1209 04:35:57.841821 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:35:57.841841 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:35:57.841905 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:35:57.865096 1193189 cri.go:89] found id: ""
	I1209 04:35:57.865109 1193189 logs.go:282] 0 containers: []
	W1209 04:35:57.865116 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:35:57.865122 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:35:57.865185 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:35:57.889214 1193189 cri.go:89] found id: ""
	I1209 04:35:57.889227 1193189 logs.go:282] 0 containers: []
	W1209 04:35:57.889234 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:35:57.889240 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:35:57.889299 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:35:57.913077 1193189 cri.go:89] found id: ""
	I1209 04:35:57.913090 1193189 logs.go:282] 0 containers: []
	W1209 04:35:57.913097 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:35:57.913102 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:35:57.913164 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:35:57.938101 1193189 cri.go:89] found id: ""
	I1209 04:35:57.938114 1193189 logs.go:282] 0 containers: []
	W1209 04:35:57.938121 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:35:57.938129 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:35:57.938139 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:35:57.968546 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:35:57.968563 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:35:58.025605 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:35:58.025626 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:35:58.042537 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:35:58.042554 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:35:58.112285 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:35:58.104144   14281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:58.104837   14281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:58.106385   14281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:58.106802   14281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:58.108456   14281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:35:58.104144   14281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:58.104837   14281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:58.106385   14281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:58.106802   14281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:58.108456   14281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:35:58.112295 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:35:58.112317 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
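
Each gathering pass runs the same five collectors, in varying order. To reproduce them by hand inside the node, copied from the Run: lines above (the kubectl path is specific to the v1.35.0-beta.0 binaries staged by minikube):

    sudo journalctl -u kubelet -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
    sudo journalctl -u containerd -n 400
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a

Only the kubectl collector can fail while the control plane is down; the journal and CRI collectors succeed regardless, which is why describe nodes produces the only logs.go:130 warning in each cycle.
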
	I1209 04:36:00.674623 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:36:00.684871 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:36:00.684932 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:36:00.723046 1193189 cri.go:89] found id: ""
	I1209 04:36:00.723060 1193189 logs.go:282] 0 containers: []
	W1209 04:36:00.723067 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:36:00.723082 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:36:00.723142 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:36:00.755063 1193189 cri.go:89] found id: ""
	I1209 04:36:00.755077 1193189 logs.go:282] 0 containers: []
	W1209 04:36:00.755094 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:36:00.755100 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:36:00.755170 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:36:00.780343 1193189 cri.go:89] found id: ""
	I1209 04:36:00.780357 1193189 logs.go:282] 0 containers: []
	W1209 04:36:00.780368 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:36:00.780373 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:36:00.780432 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:36:00.805177 1193189 cri.go:89] found id: ""
	I1209 04:36:00.805191 1193189 logs.go:282] 0 containers: []
	W1209 04:36:00.805198 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:36:00.805203 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:36:00.805261 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:36:00.829413 1193189 cri.go:89] found id: ""
	I1209 04:36:00.829426 1193189 logs.go:282] 0 containers: []
	W1209 04:36:00.829432 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:36:00.829439 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:36:00.829500 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:36:00.853086 1193189 cri.go:89] found id: ""
	I1209 04:36:00.853100 1193189 logs.go:282] 0 containers: []
	W1209 04:36:00.853107 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:36:00.853112 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:36:00.853185 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:36:00.881064 1193189 cri.go:89] found id: ""
	I1209 04:36:00.881078 1193189 logs.go:282] 0 containers: []
	W1209 04:36:00.881085 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:36:00.881093 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:36:00.881103 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:36:00.950102 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:36:00.942130   14365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:00.942767   14365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:00.944430   14365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:00.944779   14365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:00.946290   14365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:36:00.942130   14365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:00.942767   14365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:00.944430   14365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:00.944779   14365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:00.946290   14365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:36:00.950112 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:36:00.950123 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:36:01.012065 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:36:01.012086 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:36:01.041323 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:36:01.041339 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:36:01.099024 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:36:01.099044 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:36:03.616785 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:36:03.626636 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:36:03.626697 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:36:03.650973 1193189 cri.go:89] found id: ""
	I1209 04:36:03.650987 1193189 logs.go:282] 0 containers: []
	W1209 04:36:03.650994 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:36:03.650999 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:36:03.651060 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:36:03.674678 1193189 cri.go:89] found id: ""
	I1209 04:36:03.674692 1193189 logs.go:282] 0 containers: []
	W1209 04:36:03.674699 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:36:03.674705 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:36:03.674777 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:36:03.705193 1193189 cri.go:89] found id: ""
	I1209 04:36:03.705206 1193189 logs.go:282] 0 containers: []
	W1209 04:36:03.705213 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:36:03.705218 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:36:03.705281 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:36:03.733013 1193189 cri.go:89] found id: ""
	I1209 04:36:03.733026 1193189 logs.go:282] 0 containers: []
	W1209 04:36:03.733033 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:36:03.733038 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:36:03.733096 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:36:03.770375 1193189 cri.go:89] found id: ""
	I1209 04:36:03.770389 1193189 logs.go:282] 0 containers: []
	W1209 04:36:03.770396 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:36:03.770401 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:36:03.770457 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:36:03.793967 1193189 cri.go:89] found id: ""
	I1209 04:36:03.793980 1193189 logs.go:282] 0 containers: []
	W1209 04:36:03.793987 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:36:03.793992 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:36:03.794053 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:36:03.818652 1193189 cri.go:89] found id: ""
	I1209 04:36:03.818666 1193189 logs.go:282] 0 containers: []
	W1209 04:36:03.818672 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:36:03.818681 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:36:03.818691 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:36:03.873671 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:36:03.873692 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:36:03.890142 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:36:03.890159 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:36:03.958206 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:36:03.950384   14475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:03.950766   14475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:03.952402   14475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:03.952806   14475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:03.954365   14475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:36:03.950384   14475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:03.950766   14475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:03.952402   14475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:03.952806   14475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:03.954365   14475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:36:03.958216 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:36:03.958227 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:36:04.019401 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:36:04.019421 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:36:06.551878 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:36:06.561600 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:36:06.561657 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:36:06.585277 1193189 cri.go:89] found id: ""
	I1209 04:36:06.585291 1193189 logs.go:282] 0 containers: []
	W1209 04:36:06.585298 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:36:06.585304 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:36:06.585366 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:36:06.613401 1193189 cri.go:89] found id: ""
	I1209 04:36:06.613415 1193189 logs.go:282] 0 containers: []
	W1209 04:36:06.613421 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:36:06.613426 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:36:06.613483 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:36:06.642329 1193189 cri.go:89] found id: ""
	I1209 04:36:06.642342 1193189 logs.go:282] 0 containers: []
	W1209 04:36:06.642349 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:36:06.642354 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:36:06.642413 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:36:06.666445 1193189 cri.go:89] found id: ""
	I1209 04:36:06.666458 1193189 logs.go:282] 0 containers: []
	W1209 04:36:06.666465 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:36:06.666470 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:36:06.666527 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:36:06.695405 1193189 cri.go:89] found id: ""
	I1209 04:36:06.695419 1193189 logs.go:282] 0 containers: []
	W1209 04:36:06.695425 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:36:06.695431 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:36:06.695488 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:36:06.734331 1193189 cri.go:89] found id: ""
	I1209 04:36:06.734345 1193189 logs.go:282] 0 containers: []
	W1209 04:36:06.734361 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:36:06.734372 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:36:06.734441 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:36:06.766210 1193189 cri.go:89] found id: ""
	I1209 04:36:06.766223 1193189 logs.go:282] 0 containers: []
	W1209 04:36:06.766231 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:36:06.766238 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:36:06.766248 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:36:06.822607 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:36:06.822627 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:36:06.839326 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:36:06.839342 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:36:06.900387 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:36:06.892243   14581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:06.892630   14581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:06.894401   14581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:06.894869   14581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:06.896343   14581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:36:06.892243   14581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:06.892630   14581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:06.894401   14581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:06.894869   14581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:06.896343   14581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:36:06.900405 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:36:06.900421 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:36:06.961047 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:36:06.961067 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:36:09.488140 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:36:09.498332 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:36:09.498409 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:36:09.523347 1193189 cri.go:89] found id: ""
	I1209 04:36:09.523373 1193189 logs.go:282] 0 containers: []
	W1209 04:36:09.523380 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:36:09.523387 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:36:09.523459 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:36:09.550096 1193189 cri.go:89] found id: ""
	I1209 04:36:09.550111 1193189 logs.go:282] 0 containers: []
	W1209 04:36:09.550117 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:36:09.550123 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:36:09.550185 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:36:09.578695 1193189 cri.go:89] found id: ""
	I1209 04:36:09.578709 1193189 logs.go:282] 0 containers: []
	W1209 04:36:09.578715 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:36:09.578720 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:36:09.578784 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:36:09.607079 1193189 cri.go:89] found id: ""
	I1209 04:36:09.607093 1193189 logs.go:282] 0 containers: []
	W1209 04:36:09.607100 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:36:09.607105 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:36:09.607166 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:36:09.635495 1193189 cri.go:89] found id: ""
	I1209 04:36:09.635510 1193189 logs.go:282] 0 containers: []
	W1209 04:36:09.635516 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:36:09.635521 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:36:09.635584 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:36:09.661747 1193189 cri.go:89] found id: ""
	I1209 04:36:09.661761 1193189 logs.go:282] 0 containers: []
	W1209 04:36:09.661767 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:36:09.661773 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:36:09.661831 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:36:09.694535 1193189 cri.go:89] found id: ""
	I1209 04:36:09.694549 1193189 logs.go:282] 0 containers: []
	W1209 04:36:09.694556 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:36:09.694564 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:36:09.694574 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:36:09.759636 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:36:09.759656 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:36:09.777485 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:36:09.777502 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:36:09.841963 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:36:09.834188   14687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:09.834610   14687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:09.836196   14687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:09.836779   14687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:09.838239   14687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:36:09.834188   14687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:09.834610   14687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:09.836196   14687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:09.836779   14687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:09.838239   14687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:36:09.841974 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:36:09.841984 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:36:09.904615 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:36:09.904636 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
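	The block above is one full pass of minikube's apiserver wait loop: probe for a kube-apiserver process, list each expected control-plane container with crictl, then gather kubelet, dmesg, describe-nodes, containerd, and container-status logs. A minimal shell sketch of the same probe, assuming a shell on the node (the container names are exactly those queried in the log):

	    # Probe for a running apiserver process (exact match, full command line, newest).
	    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	    # List each control-plane container, running or not; empty output means none exists yet.
	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	      sudo crictl ps -a --quiet --name="$name"
	    done

	Every pass in this report returns an empty ID for all seven names, which is why each cycle falls through to log gathering.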
	I1209 04:36:12.433539 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:36:12.443370 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:36:12.443435 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:36:12.469616 1193189 cri.go:89] found id: ""
	I1209 04:36:12.469630 1193189 logs.go:282] 0 containers: []
	W1209 04:36:12.469637 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:36:12.469643 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:36:12.469704 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:36:12.493917 1193189 cri.go:89] found id: ""
	I1209 04:36:12.493930 1193189 logs.go:282] 0 containers: []
	W1209 04:36:12.493937 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:36:12.493942 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:36:12.494001 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:36:12.518803 1193189 cri.go:89] found id: ""
	I1209 04:36:12.518817 1193189 logs.go:282] 0 containers: []
	W1209 04:36:12.518842 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:36:12.518848 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:36:12.518917 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:36:12.542764 1193189 cri.go:89] found id: ""
	I1209 04:36:12.542785 1193189 logs.go:282] 0 containers: []
	W1209 04:36:12.542792 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:36:12.542797 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:36:12.542859 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:36:12.566738 1193189 cri.go:89] found id: ""
	I1209 04:36:12.566751 1193189 logs.go:282] 0 containers: []
	W1209 04:36:12.566758 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:36:12.566762 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:36:12.566830 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:36:12.594757 1193189 cri.go:89] found id: ""
	I1209 04:36:12.594772 1193189 logs.go:282] 0 containers: []
	W1209 04:36:12.594778 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:36:12.594784 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:36:12.594850 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:36:12.619407 1193189 cri.go:89] found id: ""
	I1209 04:36:12.619421 1193189 logs.go:282] 0 containers: []
	W1209 04:36:12.619427 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:36:12.619434 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:36:12.619445 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:36:12.692974 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:36:12.683791   14780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:12.684626   14780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:12.686439   14780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:12.687100   14780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:12.688999   14780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:36:12.692984 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:36:12.693001 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:36:12.766313 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:36:12.766340 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:36:12.793057 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:36:12.793075 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:36:12.849665 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:36:12.849689 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:36:15.366796 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:36:15.376649 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:36:15.376719 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:36:15.400344 1193189 cri.go:89] found id: ""
	I1209 04:36:15.400358 1193189 logs.go:282] 0 containers: []
	W1209 04:36:15.400372 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:36:15.400378 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:36:15.400437 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:36:15.425809 1193189 cri.go:89] found id: ""
	I1209 04:36:15.425822 1193189 logs.go:282] 0 containers: []
	W1209 04:36:15.425829 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:36:15.425834 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:36:15.425894 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:36:15.450444 1193189 cri.go:89] found id: ""
	I1209 04:36:15.450458 1193189 logs.go:282] 0 containers: []
	W1209 04:36:15.450466 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:36:15.450471 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:36:15.450531 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:36:15.478163 1193189 cri.go:89] found id: ""
	I1209 04:36:15.478178 1193189 logs.go:282] 0 containers: []
	W1209 04:36:15.478185 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:36:15.478190 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:36:15.478261 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:36:15.502360 1193189 cri.go:89] found id: ""
	I1209 04:36:15.502374 1193189 logs.go:282] 0 containers: []
	W1209 04:36:15.502381 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:36:15.502386 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:36:15.502450 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:36:15.530599 1193189 cri.go:89] found id: ""
	I1209 04:36:15.530614 1193189 logs.go:282] 0 containers: []
	W1209 04:36:15.530620 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:36:15.530626 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:36:15.530693 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:36:15.554654 1193189 cri.go:89] found id: ""
	I1209 04:36:15.554668 1193189 logs.go:282] 0 containers: []
	W1209 04:36:15.554675 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:36:15.554683 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:36:15.554693 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:36:15.614962 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:36:15.614982 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:36:15.641417 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:36:15.641433 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:36:15.696674 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:36:15.696692 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:36:15.714032 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:36:15.714047 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:36:15.786226 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:36:15.778061   14907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:15.778499   14907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:15.780149   14907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:15.780759   14907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:15.782381   14907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:36:18.286483 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:36:18.296288 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:36:18.296346 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:36:18.323616 1193189 cri.go:89] found id: ""
	I1209 04:36:18.323629 1193189 logs.go:282] 0 containers: []
	W1209 04:36:18.323636 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:36:18.323642 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:36:18.323706 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:36:18.348203 1193189 cri.go:89] found id: ""
	I1209 04:36:18.348218 1193189 logs.go:282] 0 containers: []
	W1209 04:36:18.348225 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:36:18.348231 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:36:18.348290 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:36:18.372639 1193189 cri.go:89] found id: ""
	I1209 04:36:18.372653 1193189 logs.go:282] 0 containers: []
	W1209 04:36:18.372660 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:36:18.372671 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:36:18.372732 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:36:18.400006 1193189 cri.go:89] found id: ""
	I1209 04:36:18.400037 1193189 logs.go:282] 0 containers: []
	W1209 04:36:18.400044 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:36:18.400049 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:36:18.400120 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:36:18.424038 1193189 cri.go:89] found id: ""
	I1209 04:36:18.424053 1193189 logs.go:282] 0 containers: []
	W1209 04:36:18.424060 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:36:18.424068 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:36:18.424135 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:36:18.447692 1193189 cri.go:89] found id: ""
	I1209 04:36:18.447719 1193189 logs.go:282] 0 containers: []
	W1209 04:36:18.447726 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:36:18.447737 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:36:18.447809 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:36:18.473888 1193189 cri.go:89] found id: ""
	I1209 04:36:18.473902 1193189 logs.go:282] 0 containers: []
	W1209 04:36:18.473908 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:36:18.473916 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:36:18.473925 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:36:18.531920 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:36:18.531945 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:36:18.549523 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:36:18.549540 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:36:18.610296 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:36:18.601988   14993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:18.602374   14993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:18.603904   14993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:18.604520   14993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:18.606270   14993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:36:18.610306 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:36:18.610316 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:36:18.673185 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:36:18.673204 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
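	Every describe-nodes attempt fails identically: the dial to [::1]:8441 is refused, meaning nothing is listening on the apiserver port at all, rather than the server answering with an error. To confirm that independently of kubectl, one could probe the port directly; a sketch, assuming curl and ss are available on the node:

	    # Expect "connection refused" while the apiserver is down; a TLS-level response once it is up.
	    curl -k --max-time 5 https://localhost:8441/healthz
	    # Check for any listener on the port without speaking HTTP at all.
	    sudo ss -ltn 'sport = :8441'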
	I1209 04:36:21.215945 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:36:21.225779 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:36:21.225842 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:36:21.251614 1193189 cri.go:89] found id: ""
	I1209 04:36:21.251627 1193189 logs.go:282] 0 containers: []
	W1209 04:36:21.251633 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:36:21.251639 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:36:21.251701 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:36:21.274988 1193189 cri.go:89] found id: ""
	I1209 04:36:21.275002 1193189 logs.go:282] 0 containers: []
	W1209 04:36:21.275009 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:36:21.275016 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:36:21.275073 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:36:21.298100 1193189 cri.go:89] found id: ""
	I1209 04:36:21.298113 1193189 logs.go:282] 0 containers: []
	W1209 04:36:21.298120 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:36:21.298125 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:36:21.298188 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:36:21.323043 1193189 cri.go:89] found id: ""
	I1209 04:36:21.323057 1193189 logs.go:282] 0 containers: []
	W1209 04:36:21.323063 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:36:21.323068 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:36:21.323128 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:36:21.346629 1193189 cri.go:89] found id: ""
	I1209 04:36:21.346642 1193189 logs.go:282] 0 containers: []
	W1209 04:36:21.346649 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:36:21.346654 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:36:21.346713 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:36:21.370687 1193189 cri.go:89] found id: ""
	I1209 04:36:21.370700 1193189 logs.go:282] 0 containers: []
	W1209 04:36:21.370707 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:36:21.370712 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:36:21.370767 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:36:21.394774 1193189 cri.go:89] found id: ""
	I1209 04:36:21.394788 1193189 logs.go:282] 0 containers: []
	W1209 04:36:21.394794 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:36:21.394803 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:36:21.394813 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:36:21.458240 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:36:21.449927   15094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:21.450664   15094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:21.452537   15094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:21.452900   15094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:21.454442   15094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:36:21.458249 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:36:21.458260 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:36:21.519830 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:36:21.519850 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:36:21.556076 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:36:21.556093 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:36:21.614749 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:36:21.614769 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:36:24.132222 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:36:24.143277 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:36:24.143352 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:36:24.173051 1193189 cri.go:89] found id: ""
	I1209 04:36:24.173065 1193189 logs.go:282] 0 containers: []
	W1209 04:36:24.173072 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:36:24.173077 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:36:24.173134 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:36:24.198407 1193189 cri.go:89] found id: ""
	I1209 04:36:24.198421 1193189 logs.go:282] 0 containers: []
	W1209 04:36:24.198428 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:36:24.198432 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:36:24.198490 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:36:24.224986 1193189 cri.go:89] found id: ""
	I1209 04:36:24.225000 1193189 logs.go:282] 0 containers: []
	W1209 04:36:24.225007 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:36:24.225012 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:36:24.225071 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:36:24.249942 1193189 cri.go:89] found id: ""
	I1209 04:36:24.249957 1193189 logs.go:282] 0 containers: []
	W1209 04:36:24.249964 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:36:24.249969 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:36:24.250031 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:36:24.274252 1193189 cri.go:89] found id: ""
	I1209 04:36:24.274266 1193189 logs.go:282] 0 containers: []
	W1209 04:36:24.274273 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:36:24.274278 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:36:24.274347 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:36:24.302468 1193189 cri.go:89] found id: ""
	I1209 04:36:24.302485 1193189 logs.go:282] 0 containers: []
	W1209 04:36:24.302491 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:36:24.302497 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:36:24.302582 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:36:24.328883 1193189 cri.go:89] found id: ""
	I1209 04:36:24.328898 1193189 logs.go:282] 0 containers: []
	W1209 04:36:24.328905 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:36:24.328913 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:36:24.328923 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:36:24.386082 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:36:24.386102 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:36:24.403782 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:36:24.403798 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:36:24.473588 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:36:24.462744   15202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:24.463330   15202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:24.466259   15202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:24.467650   15202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:24.468411   15202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:36:24.473598 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:36:24.473609 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:36:24.534819 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:36:24.534841 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:36:27.064221 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:36:27.074260 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:36:27.074334 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:36:27.098417 1193189 cri.go:89] found id: ""
	I1209 04:36:27.098445 1193189 logs.go:282] 0 containers: []
	W1209 04:36:27.098452 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:36:27.098457 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:36:27.098527 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:36:27.126158 1193189 cri.go:89] found id: ""
	I1209 04:36:27.126172 1193189 logs.go:282] 0 containers: []
	W1209 04:36:27.126184 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:36:27.126189 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:36:27.126250 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:36:27.154258 1193189 cri.go:89] found id: ""
	I1209 04:36:27.154271 1193189 logs.go:282] 0 containers: []
	W1209 04:36:27.154278 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:36:27.154284 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:36:27.154343 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:36:27.179273 1193189 cri.go:89] found id: ""
	I1209 04:36:27.179286 1193189 logs.go:282] 0 containers: []
	W1209 04:36:27.179293 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:36:27.179309 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:36:27.179367 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:36:27.204706 1193189 cri.go:89] found id: ""
	I1209 04:36:27.204720 1193189 logs.go:282] 0 containers: []
	W1209 04:36:27.204727 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:36:27.204732 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:36:27.204791 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:36:27.230005 1193189 cri.go:89] found id: ""
	I1209 04:36:27.230019 1193189 logs.go:282] 0 containers: []
	W1209 04:36:27.230026 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:36:27.230032 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:36:27.230098 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:36:27.254482 1193189 cri.go:89] found id: ""
	I1209 04:36:27.254496 1193189 logs.go:282] 0 containers: []
	W1209 04:36:27.254512 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:36:27.254521 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:36:27.254531 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:36:27.310002 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:36:27.310022 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:36:27.327694 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:36:27.327713 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:36:27.395258 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:36:27.386987   15311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:27.387759   15311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:27.389469   15311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:27.389968   15311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:27.391467   15311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:36:27.395269 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:36:27.395279 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:36:27.457675 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:36:27.457694 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:36:29.986185 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:36:30.005634 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:36:30.005711 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:36:30.038694 1193189 cri.go:89] found id: ""
	I1209 04:36:30.038709 1193189 logs.go:282] 0 containers: []
	W1209 04:36:30.038717 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:36:30.038723 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:36:30.038792 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:36:30.065088 1193189 cri.go:89] found id: ""
	I1209 04:36:30.065110 1193189 logs.go:282] 0 containers: []
	W1209 04:36:30.065119 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:36:30.065124 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:36:30.065188 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:36:30.090159 1193189 cri.go:89] found id: ""
	I1209 04:36:30.090173 1193189 logs.go:282] 0 containers: []
	W1209 04:36:30.090180 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:36:30.090185 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:36:30.090250 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:36:30.118708 1193189 cri.go:89] found id: ""
	I1209 04:36:30.118721 1193189 logs.go:282] 0 containers: []
	W1209 04:36:30.118728 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:36:30.118734 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:36:30.118796 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:36:30.146404 1193189 cri.go:89] found id: ""
	I1209 04:36:30.146417 1193189 logs.go:282] 0 containers: []
	W1209 04:36:30.146424 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:36:30.146429 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:36:30.146488 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:36:30.170089 1193189 cri.go:89] found id: ""
	I1209 04:36:30.170102 1193189 logs.go:282] 0 containers: []
	W1209 04:36:30.170109 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:36:30.170114 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:36:30.170171 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:36:30.194303 1193189 cri.go:89] found id: ""
	I1209 04:36:30.194317 1193189 logs.go:282] 0 containers: []
	W1209 04:36:30.194327 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:36:30.194334 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:36:30.194344 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:36:30.230597 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:36:30.230613 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:36:30.285894 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:36:30.285913 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:36:30.303774 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:36:30.303789 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:36:30.370275 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:36:30.361691   15431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:30.362453   15431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:30.364059   15431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:30.364598   15431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:30.366280   15431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:36:30.370284 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:36:30.370297 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:36:32.932454 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:36:32.942712 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:36:32.942772 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:36:32.970393 1193189 cri.go:89] found id: ""
	I1209 04:36:32.970406 1193189 logs.go:282] 0 containers: []
	W1209 04:36:32.970413 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:36:32.970418 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:36:32.970480 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:36:33.001462 1193189 cri.go:89] found id: ""
	I1209 04:36:33.001476 1193189 logs.go:282] 0 containers: []
	W1209 04:36:33.001489 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:36:33.001495 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:36:33.001561 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:36:33.027773 1193189 cri.go:89] found id: ""
	I1209 04:36:33.027787 1193189 logs.go:282] 0 containers: []
	W1209 04:36:33.027794 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:36:33.027799 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:36:33.027858 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:36:33.054066 1193189 cri.go:89] found id: ""
	I1209 04:36:33.054080 1193189 logs.go:282] 0 containers: []
	W1209 04:36:33.054086 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:36:33.054091 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:36:33.054152 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:36:33.077043 1193189 cri.go:89] found id: ""
	I1209 04:36:33.077057 1193189 logs.go:282] 0 containers: []
	W1209 04:36:33.077064 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:36:33.077069 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:36:33.077127 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:36:33.101043 1193189 cri.go:89] found id: ""
	I1209 04:36:33.101056 1193189 logs.go:282] 0 containers: []
	W1209 04:36:33.101063 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:36:33.101068 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:36:33.101126 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:36:33.125074 1193189 cri.go:89] found id: ""
	I1209 04:36:33.125088 1193189 logs.go:282] 0 containers: []
	W1209 04:36:33.125096 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:36:33.125104 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:36:33.125115 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:36:33.181829 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:36:33.181849 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:36:33.198599 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:36:33.198616 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:36:33.259348 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:36:33.250653   15527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:33.251506   15527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:33.253061   15527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:33.253668   15527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:33.255199   15527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:36:33.259358 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:36:33.259369 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:36:33.321638 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:36:33.321660 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
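	With no control-plane containers to inspect, the only remaining evidence is the journals each pass collects. When reading those 400-line windows by hand, filtering the kubelet and containerd units for failure keywords narrows the search; a sketch using the same journalctl invocations as the report plus standard grep (the keyword list is an assumption, not taken from the log):

	    # Same journals the report gathers, filtered for likely failure causes.
	    sudo journalctl -u kubelet -n 400 --no-pager | grep -Ei 'error|fail|apiserver'
	    sudo journalctl -u containerd -n 400 --no-pager | grep -Ei 'error|fail'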
	I1209 04:36:35.847785 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:36:35.857973 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:36:35.858039 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:36:35.882819 1193189 cri.go:89] found id: ""
	I1209 04:36:35.882832 1193189 logs.go:282] 0 containers: []
	W1209 04:36:35.882839 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:36:35.882844 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:36:35.882908 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:36:35.911762 1193189 cri.go:89] found id: ""
	I1209 04:36:35.911776 1193189 logs.go:282] 0 containers: []
	W1209 04:36:35.911784 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:36:35.911789 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:36:35.911849 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:36:35.946631 1193189 cri.go:89] found id: ""
	I1209 04:36:35.946646 1193189 logs.go:282] 0 containers: []
	W1209 04:36:35.946652 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:36:35.946663 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:36:35.946721 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:36:35.972345 1193189 cri.go:89] found id: ""
	I1209 04:36:35.972360 1193189 logs.go:282] 0 containers: []
	W1209 04:36:35.972367 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:36:35.972372 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:36:35.972438 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:36:36.010844 1193189 cri.go:89] found id: ""
	I1209 04:36:36.010859 1193189 logs.go:282] 0 containers: []
	W1209 04:36:36.010867 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:36:36.010876 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:36:36.010940 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:36:36.036297 1193189 cri.go:89] found id: ""
	I1209 04:36:36.036310 1193189 logs.go:282] 0 containers: []
	W1209 04:36:36.036317 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:36:36.036323 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:36:36.036387 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:36:36.066383 1193189 cri.go:89] found id: ""
	I1209 04:36:36.066398 1193189 logs.go:282] 0 containers: []
	W1209 04:36:36.066404 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:36:36.066412 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:36:36.066422 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:36:36.123320 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:36:36.123340 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:36:36.141674 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:36:36.141691 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:36:36.207738 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:36:36.198534   15631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:36.199238   15631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:36.201129   15631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:36.201829   15631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:36.203559   15631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:36:36.198534   15631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:36.199238   15631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:36.201129   15631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:36.201829   15631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:36.203559   15631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:36:36.207749 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:36:36.207760 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:36:36.271530 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:36:36.271553 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
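Every "describe nodes" attempt fails identically: nothing answers on localhost:8441. A quick way to confirm from inside the node whether anything is bound to that port and whether an apiserver container at least left logs behind; a sketch, assuming ss and curl are available in the node image:

    # is anything listening on the apiserver port?
    sudo ss -ltnp | grep 8441 || echo "no listener on 8441"
    # does the endpoint answer at all? (-k: certificate validation is irrelevant here)
    curl -sk --max-time 5 https://localhost:8441/livez; echo
    # any (possibly exited) apiserver container whose logs we can read?
    id=$(sudo crictl ps -a --quiet --name=kube-apiserver | head -n1)
    [ -n "$id" ] && sudo crictl logs "$id" | tail -n 50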
	[... the probe-and-gather cycle above repeats roughly every 3 seconds from 04:36:38 through 04:36:54, with only timestamps, PIDs, and the order of the gathered log sources varying: no kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, or kindnet containers are found, and every "describe nodes" attempt fails with the same connection-refused errors on localhost:8441 ...]
	I1209 04:36:56.538809 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:36:56.548679 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:36:56.548738 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:36:56.572505 1193189 cri.go:89] found id: ""
	I1209 04:36:56.572519 1193189 logs.go:282] 0 containers: []
	W1209 04:36:56.572526 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:36:56.572531 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:36:56.572591 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:36:56.596732 1193189 cri.go:89] found id: ""
	I1209 04:36:56.596746 1193189 logs.go:282] 0 containers: []
	W1209 04:36:56.596753 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:36:56.596758 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:36:56.596817 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:36:56.622042 1193189 cri.go:89] found id: ""
	I1209 04:36:56.622056 1193189 logs.go:282] 0 containers: []
	W1209 04:36:56.622063 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:36:56.622068 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:36:56.622125 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:36:56.644865 1193189 cri.go:89] found id: ""
	I1209 04:36:56.644879 1193189 logs.go:282] 0 containers: []
	W1209 04:36:56.644885 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:36:56.644890 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:36:56.644947 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:36:56.670230 1193189 cri.go:89] found id: ""
	I1209 04:36:56.670244 1193189 logs.go:282] 0 containers: []
	W1209 04:36:56.670252 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:36:56.670257 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:36:56.670314 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:36:56.697566 1193189 cri.go:89] found id: ""
	I1209 04:36:56.697580 1193189 logs.go:282] 0 containers: []
	W1209 04:36:56.697586 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:36:56.697592 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:36:56.697650 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:36:56.726250 1193189 cri.go:89] found id: ""
	I1209 04:36:56.726264 1193189 logs.go:282] 0 containers: []
	W1209 04:36:56.726270 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:36:56.726278 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:36:56.726287 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:36:56.789536 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:36:56.789556 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:36:56.818317 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:36:56.818332 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:36:56.874653 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:36:56.874671 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:36:56.892967 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:36:56.892987 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:36:56.969870 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:36:56.961196   16367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:56.962227   16367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:56.964000   16367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:56.964364   16367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:56.965851   16367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:36:56.961196   16367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:56.962227   16367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:56.964000   16367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:56.964364   16367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:56.965851   16367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
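The kubectl errors all end with "did you specify the right host or port?", so it is worth reading back the server address actually baked into the node's kubeconfig. A sketch using the same pinned kubectl binary and kubeconfig path the log invokes:

    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl config view \
      --kubeconfig=/var/lib/minikube/kubeconfig \
      -o jsonpath='{.clusters[0].cluster.server}'; echo
    # if this prints https://localhost:8441, the kubeconfig is consistent with the
    # errors above, and the problem is the missing apiserver, not a wrong address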
	I1209 04:36:59.470133 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:36:59.480193 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:36:59.480253 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:36:59.505288 1193189 cri.go:89] found id: ""
	I1209 04:36:59.505301 1193189 logs.go:282] 0 containers: []
	W1209 04:36:59.505308 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:36:59.505314 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:36:59.505375 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:36:59.530093 1193189 cri.go:89] found id: ""
	I1209 04:36:59.530108 1193189 logs.go:282] 0 containers: []
	W1209 04:36:59.530114 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:36:59.530120 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:36:59.530180 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:36:59.558857 1193189 cri.go:89] found id: ""
	I1209 04:36:59.558870 1193189 logs.go:282] 0 containers: []
	W1209 04:36:59.558877 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:36:59.558882 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:36:59.558939 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:36:59.587253 1193189 cri.go:89] found id: ""
	I1209 04:36:59.587267 1193189 logs.go:282] 0 containers: []
	W1209 04:36:59.587273 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:36:59.587278 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:36:59.587334 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:36:59.615574 1193189 cri.go:89] found id: ""
	I1209 04:36:59.615587 1193189 logs.go:282] 0 containers: []
	W1209 04:36:59.615594 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:36:59.615599 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:36:59.615661 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:36:59.640949 1193189 cri.go:89] found id: ""
	I1209 04:36:59.640963 1193189 logs.go:282] 0 containers: []
	W1209 04:36:59.640969 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:36:59.640975 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:36:59.641036 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:36:59.669059 1193189 cri.go:89] found id: ""
	I1209 04:36:59.669073 1193189 logs.go:282] 0 containers: []
	W1209 04:36:59.669079 1193189 logs.go:284] No container was found matching "kindnet"
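Each "listing CRI containers" pass above issues the same crictl query per control-plane component over SSH; the empty found id: "" results mean containerd is tracking no matching containers at all. A rough local equivalent, sketched under the assumption that crictl and sudo are available on the node (component names copied from the log):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet",
        }
        for _, name := range components {
            // Same query the log issues over SSH for each component.
            out, err := exec.Command("sudo", "crictl", "ps", "-a",
                "--quiet", "--name="+name).Output()
            ids := strings.TrimSpace(string(out))
            if err != nil || ids == "" {
                fmt.Printf("no container found matching %q\n", name)
                continue
            }
            fmt.Printf("%s: %s\n", name, ids)
        }
    }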
	I1209 04:36:59.669087 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:36:59.669099 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:36:59.728975 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:36:59.728993 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:36:59.746224 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:36:59.746240 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:36:59.811892 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:36:59.803565   16459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:59.804329   16459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:59.805884   16459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:59.806435   16459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:36:59.808154   16459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:36:59.811908 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:36:59.811919 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:36:59.874287 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:36:59.874310 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
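The log-gathering pass that follows each failed probe snapshots node state: the last 400 journal lines for the kubelet and containerd units, recent dmesg warnings, and a full crictl/docker container listing. A sketch of one such collection step, assuming journalctl and sudo are available (unit names and the 400-line cap come from the commands above):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // gatherUnit mirrors the journalctl invocations above: fetch the
    // most recent n lines of a systemd unit's journal.
    func gatherUnit(unit string, n int) (string, error) {
        out, err := exec.Command("sudo", "journalctl",
            "-u", unit, "-n", fmt.Sprint(n)).CombinedOutput()
        return string(out), err
    }

    func main() {
        for _, unit := range []string{"kubelet", "containerd"} {
            logs, err := gatherUnit(unit, 400)
            if err != nil {
                fmt.Printf("gathering %s logs failed: %v\n", unit, err)
                continue
            }
            fmt.Printf("=== %s: %d bytes of journal ===\n", unit, len(logs))
        }
    }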
	I1209 04:37:02.402643 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:37:02.413719 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:37:02.413785 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:37:02.440871 1193189 cri.go:89] found id: ""
	I1209 04:37:02.440885 1193189 logs.go:282] 0 containers: []
	W1209 04:37:02.440892 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:37:02.440897 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:37:02.440962 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:37:02.466112 1193189 cri.go:89] found id: ""
	I1209 04:37:02.466125 1193189 logs.go:282] 0 containers: []
	W1209 04:37:02.466132 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:37:02.466137 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:37:02.466195 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:37:02.491412 1193189 cri.go:89] found id: ""
	I1209 04:37:02.491426 1193189 logs.go:282] 0 containers: []
	W1209 04:37:02.491433 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:37:02.491438 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:37:02.491495 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:37:02.519036 1193189 cri.go:89] found id: ""
	I1209 04:37:02.519051 1193189 logs.go:282] 0 containers: []
	W1209 04:37:02.519058 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:37:02.519063 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:37:02.519126 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:37:02.547912 1193189 cri.go:89] found id: ""
	I1209 04:37:02.547927 1193189 logs.go:282] 0 containers: []
	W1209 04:37:02.547934 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:37:02.547939 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:37:02.548000 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:37:02.574804 1193189 cri.go:89] found id: ""
	I1209 04:37:02.574818 1193189 logs.go:282] 0 containers: []
	W1209 04:37:02.574826 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:37:02.574832 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:37:02.574910 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:37:02.598953 1193189 cri.go:89] found id: ""
	I1209 04:37:02.598967 1193189 logs.go:282] 0 containers: []
	W1209 04:37:02.598973 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:37:02.598981 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:37:02.598994 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:37:02.661273 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:37:02.661293 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:37:02.692376 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:37:02.692392 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:37:02.750097 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:37:02.750116 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:37:02.768673 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:37:02.768691 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:37:02.831464 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:37:02.822705   16576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:37:02.823490   16576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:37:02.825015   16576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:37:02.825561   16576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:37:02.827104   16576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:37:05.331744 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:37:05.341534 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:37:05.341596 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:37:05.366255 1193189 cri.go:89] found id: ""
	I1209 04:37:05.366268 1193189 logs.go:282] 0 containers: []
	W1209 04:37:05.366275 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:37:05.366280 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:37:05.366339 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:37:05.391184 1193189 cri.go:89] found id: ""
	I1209 04:37:05.391198 1193189 logs.go:282] 0 containers: []
	W1209 04:37:05.391204 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:37:05.391211 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:37:05.391273 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:37:05.418240 1193189 cri.go:89] found id: ""
	I1209 04:37:05.418253 1193189 logs.go:282] 0 containers: []
	W1209 04:37:05.418259 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:37:05.418264 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:37:05.418327 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:37:05.442720 1193189 cri.go:89] found id: ""
	I1209 04:37:05.442734 1193189 logs.go:282] 0 containers: []
	W1209 04:37:05.442740 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:37:05.442746 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:37:05.442809 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:37:05.467915 1193189 cri.go:89] found id: ""
	I1209 04:37:05.467930 1193189 logs.go:282] 0 containers: []
	W1209 04:37:05.467937 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:37:05.467942 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:37:05.468009 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:37:05.491304 1193189 cri.go:89] found id: ""
	I1209 04:37:05.491318 1193189 logs.go:282] 0 containers: []
	W1209 04:37:05.491325 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:37:05.491330 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:37:05.491388 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:37:05.520597 1193189 cri.go:89] found id: ""
	I1209 04:37:05.520616 1193189 logs.go:282] 0 containers: []
	W1209 04:37:05.520623 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:37:05.520631 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:37:05.520642 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:37:05.577158 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:37:05.577177 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:37:05.593604 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:37:05.593620 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:37:05.661751 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:37:05.653767   16667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:37:05.654429   16667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:37:05.656081   16667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:37:05.656695   16667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:37:05.658088   16667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:37:05.661761 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:37:05.661771 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:37:05.729846 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:37:05.729866 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:37:08.257598 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:37:08.267457 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:37:08.267520 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:37:08.295093 1193189 cri.go:89] found id: ""
	I1209 04:37:08.295107 1193189 logs.go:282] 0 containers: []
	W1209 04:37:08.295114 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:37:08.295119 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:37:08.295181 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:37:08.320140 1193189 cri.go:89] found id: ""
	I1209 04:37:08.320153 1193189 logs.go:282] 0 containers: []
	W1209 04:37:08.320160 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:37:08.320165 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:37:08.320233 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:37:08.344055 1193189 cri.go:89] found id: ""
	I1209 04:37:08.344069 1193189 logs.go:282] 0 containers: []
	W1209 04:37:08.344075 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:37:08.344081 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:37:08.344141 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:37:08.372791 1193189 cri.go:89] found id: ""
	I1209 04:37:08.372805 1193189 logs.go:282] 0 containers: []
	W1209 04:37:08.372811 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:37:08.372816 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:37:08.372874 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:37:08.396162 1193189 cri.go:89] found id: ""
	I1209 04:37:08.396175 1193189 logs.go:282] 0 containers: []
	W1209 04:37:08.396182 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:37:08.396187 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:37:08.396245 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:37:08.420733 1193189 cri.go:89] found id: ""
	I1209 04:37:08.420747 1193189 logs.go:282] 0 containers: []
	W1209 04:37:08.420755 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:37:08.420769 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:37:08.420830 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:37:08.444879 1193189 cri.go:89] found id: ""
	I1209 04:37:08.444894 1193189 logs.go:282] 0 containers: []
	W1209 04:37:08.444900 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:37:08.444918 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:37:08.444929 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:37:08.508132 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:37:08.499420   16764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:37:08.499882   16764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:37:08.501619   16764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:37:08.502150   16764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:37:08.503673   16764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:37:08.508143 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:37:08.508156 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:37:08.570875 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:37:08.570900 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:37:08.602018 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:37:08.602034 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:37:08.663156 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:37:08.663174 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:37:11.180415 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:37:11.191088 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:37:11.191148 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:37:11.218679 1193189 cri.go:89] found id: ""
	I1209 04:37:11.218696 1193189 logs.go:282] 0 containers: []
	W1209 04:37:11.218703 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:37:11.218708 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:37:11.218766 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:37:11.253810 1193189 cri.go:89] found id: ""
	I1209 04:37:11.253842 1193189 logs.go:282] 0 containers: []
	W1209 04:37:11.253849 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:37:11.253855 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:37:11.253925 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:37:11.279585 1193189 cri.go:89] found id: ""
	I1209 04:37:11.279599 1193189 logs.go:282] 0 containers: []
	W1209 04:37:11.279605 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:37:11.279610 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:37:11.279668 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:37:11.303733 1193189 cri.go:89] found id: ""
	I1209 04:37:11.303747 1193189 logs.go:282] 0 containers: []
	W1209 04:37:11.303754 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:37:11.303759 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:37:11.303818 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:37:11.328678 1193189 cri.go:89] found id: ""
	I1209 04:37:11.328692 1193189 logs.go:282] 0 containers: []
	W1209 04:37:11.328699 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:37:11.328710 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:37:11.328768 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:37:11.352807 1193189 cri.go:89] found id: ""
	I1209 04:37:11.352830 1193189 logs.go:282] 0 containers: []
	W1209 04:37:11.352838 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:37:11.352843 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:37:11.352904 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:37:11.380926 1193189 cri.go:89] found id: ""
	I1209 04:37:11.380940 1193189 logs.go:282] 0 containers: []
	W1209 04:37:11.380946 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:37:11.380954 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:37:11.380964 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:37:11.443730 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:37:11.443751 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:37:11.471147 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:37:11.471163 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:37:11.528045 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:37:11.528068 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:37:11.545822 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:37:11.545839 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:37:11.612652 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:37:11.604231   16889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:37:11.604891   16889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:37:11.606570   16889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:37:11.607169   16889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:37:11.608878   16889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:37:14.112937 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:37:14.123734 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:37:14.123791 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:37:14.149868 1193189 cri.go:89] found id: ""
	I1209 04:37:14.149884 1193189 logs.go:282] 0 containers: []
	W1209 04:37:14.149891 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:37:14.149897 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:37:14.149957 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:37:14.175575 1193189 cri.go:89] found id: ""
	I1209 04:37:14.175589 1193189 logs.go:282] 0 containers: []
	W1209 04:37:14.175595 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:37:14.175601 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:37:14.175665 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:37:14.202589 1193189 cri.go:89] found id: ""
	I1209 04:37:14.202615 1193189 logs.go:282] 0 containers: []
	W1209 04:37:14.202621 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:37:14.202627 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:37:14.202707 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:37:14.229085 1193189 cri.go:89] found id: ""
	I1209 04:37:14.229099 1193189 logs.go:282] 0 containers: []
	W1209 04:37:14.229109 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:37:14.229117 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:37:14.229183 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:37:14.254508 1193189 cri.go:89] found id: ""
	I1209 04:37:14.254522 1193189 logs.go:282] 0 containers: []
	W1209 04:37:14.254529 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:37:14.254534 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:37:14.254626 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:37:14.282967 1193189 cri.go:89] found id: ""
	I1209 04:37:14.282990 1193189 logs.go:282] 0 containers: []
	W1209 04:37:14.282997 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:37:14.283003 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:37:14.283072 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:37:14.307959 1193189 cri.go:89] found id: ""
	I1209 04:37:14.307973 1193189 logs.go:282] 0 containers: []
	W1209 04:37:14.307980 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:37:14.307988 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:37:14.307998 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:37:14.337297 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:37:14.337312 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:37:14.393504 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:37:14.393523 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:37:14.411720 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:37:14.411736 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:37:14.476754 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:37:14.469112   16989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:37:14.469506   16989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:37:14.470955   16989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:37:14.471259   16989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:37:14.472758   16989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:37:14.476764 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:37:14.476775 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:37:17.039773 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:37:17.050019 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:37:17.050078 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:37:17.074811 1193189 cri.go:89] found id: ""
	I1209 04:37:17.074825 1193189 logs.go:282] 0 containers: []
	W1209 04:37:17.074841 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:37:17.074847 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:37:17.074928 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:37:17.098749 1193189 cri.go:89] found id: ""
	I1209 04:37:17.098763 1193189 logs.go:282] 0 containers: []
	W1209 04:37:17.098779 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:37:17.098784 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:37:17.098851 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:37:17.123314 1193189 cri.go:89] found id: ""
	I1209 04:37:17.123328 1193189 logs.go:282] 0 containers: []
	W1209 04:37:17.123334 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:37:17.123348 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:37:17.123404 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:37:17.148281 1193189 cri.go:89] found id: ""
	I1209 04:37:17.148304 1193189 logs.go:282] 0 containers: []
	W1209 04:37:17.148314 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:37:17.148319 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:37:17.148386 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:37:17.178459 1193189 cri.go:89] found id: ""
	I1209 04:37:17.178473 1193189 logs.go:282] 0 containers: []
	W1209 04:37:17.178480 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:37:17.178487 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:37:17.178545 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:37:17.214370 1193189 cri.go:89] found id: ""
	I1209 04:37:17.214383 1193189 logs.go:282] 0 containers: []
	W1209 04:37:17.214390 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:37:17.214395 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:37:17.214455 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:37:17.241547 1193189 cri.go:89] found id: ""
	I1209 04:37:17.241560 1193189 logs.go:282] 0 containers: []
	W1209 04:37:17.241567 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:37:17.241574 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:37:17.241584 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:37:17.300902 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:37:17.300920 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:37:17.318244 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:37:17.318260 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:37:17.379838 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:37:17.371574   17083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:37:17.372258   17083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:37:17.373943   17083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:37:17.374513   17083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:37:17.376103   17083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:37:17.379865 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:37:17.379875 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:37:17.442204 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:37:17.442227 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:37:19.972933 1193189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:37:19.982835 1193189 kubeadm.go:602] duration metric: took 4m3.833613801s to restartPrimaryControlPlane
	W1209 04:37:19.982896 1193189 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
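After roughly four minutes of this probe loop, minikube gives up on restarting the existing control plane and falls back to a clean slate: kubeadm reset to wipe control-plane state, then a fresh kubeadm init. A sketch of the reset invocation it runs next, with the version-pinned binary path and CRI socket copied from the log:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Same fallback the log runs next: wipe control-plane state so
        // a fresh kubeadm init can start from scratch.
        cmd := exec.Command("sudo", "/bin/bash", "-c",
            `env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" `+
                `kubeadm reset --cri-socket /run/containerd/containerd.sock --force`)
        if out, err := cmd.CombinedOutput(); err != nil {
            fmt.Printf("kubeadm reset failed: %v\n%s", err, out)
        }
    }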
	I1209 04:37:19.982967 1193189 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1209 04:37:20.394224 1193189 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 04:37:20.407222 1193189 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 04:37:20.415043 1193189 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1209 04:37:20.415096 1193189 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 04:37:20.422447 1193189 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 04:37:20.422458 1193189 kubeadm.go:158] found existing configuration files:
	
	I1209 04:37:20.422511 1193189 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1209 04:37:20.429958 1193189 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 04:37:20.430020 1193189 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 04:37:20.437087 1193189 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1209 04:37:20.444177 1193189 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 04:37:20.444229 1193189 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 04:37:20.451583 1193189 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1209 04:37:20.459107 1193189 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 04:37:20.459158 1193189 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 04:37:20.466013 1193189 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1209 04:37:20.473265 1193189 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 04:37:20.473320 1193189 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
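The ls/grep/rm sequence above is the stale-config check: each kubeconfig under /etc/kubernetes is grepped for the expected control-plane endpoint and removed when the endpoint is not found, so kubeadm init regenerates it; here every grep exits with status 2 simply because kubeadm reset already deleted the files. A condensed sketch of that check, assuming it runs with privileges that can read and remove the root-owned files (paths and endpoint copied from the log):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        endpoint := "https://control-plane.minikube.internal:8441"
        files := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, f := range files {
            data, err := os.ReadFile(f)
            // Missing file or wrong endpoint: remove it so kubeadm init
            // writes a fresh kubeconfig, as the rm -f calls above do.
            if err != nil || !strings.Contains(string(data), endpoint) {
                os.Remove(f)
                fmt.Printf("%s: stale or absent, removed\n", f)
                continue
            }
            fmt.Printf("%s: endpoint matches, kept\n", f)
        }
    }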
	I1209 04:37:20.480362 1193189 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1209 04:37:20.591599 1193189 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1209 04:37:20.592032 1193189 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1209 04:37:20.651935 1193189 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1209 04:41:22.764150 1193189 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1209 04:41:22.764175 1193189 kubeadm.go:319] 
	I1209 04:41:22.764241 1193189 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
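This is kubeadm's standard wait-control-plane gate: it polls the kubelet's local health endpoint, the curl command quoted in the error above, and aborts once the kubelet stays unhealthy for the full 4m0s window. A minimal sketch of that polling loop, assuming only the standard library (URL and deadline taken from the log):

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        deadline := time.Now().Add(4 * time.Minute)
        for time.Now().Before(deadline) {
            // Equivalent of: curl -sSL http://127.0.0.1:10248/healthz
            resp, err := http.Get("http://127.0.0.1:10248/healthz")
            if err == nil && resp.StatusCode == http.StatusOK {
                resp.Body.Close()
                fmt.Println("kubelet is healthy")
                return
            }
            if resp != nil {
                resp.Body.Close()
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("kubelet never became healthy within 4m0s")
    }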
	I1209 04:41:22.768309 1193189 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1209 04:41:22.768359 1193189 kubeadm.go:319] [preflight] Running pre-flight checks
	I1209 04:41:22.768442 1193189 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1209 04:41:22.768497 1193189 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1209 04:41:22.768531 1193189 kubeadm.go:319] OS: Linux
	I1209 04:41:22.768594 1193189 kubeadm.go:319] CGROUPS_CPU: enabled
	I1209 04:41:22.768653 1193189 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1209 04:41:22.768699 1193189 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1209 04:41:22.768746 1193189 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1209 04:41:22.768792 1193189 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1209 04:41:22.768840 1193189 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1209 04:41:22.768883 1193189 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1209 04:41:22.768930 1193189 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1209 04:41:22.768975 1193189 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1209 04:41:22.769046 1193189 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1209 04:41:22.769140 1193189 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1209 04:41:22.769229 1193189 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1209 04:41:22.769290 1193189 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1209 04:41:22.772269 1193189 out.go:252]   - Generating certificates and keys ...
	I1209 04:41:22.772365 1193189 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1209 04:41:22.772442 1193189 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1209 04:41:22.772517 1193189 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1209 04:41:22.772582 1193189 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1209 04:41:22.772651 1193189 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1209 04:41:22.772740 1193189 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1209 04:41:22.772808 1193189 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1209 04:41:22.772883 1193189 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1209 04:41:22.772975 1193189 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1209 04:41:22.773069 1193189 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1209 04:41:22.773105 1193189 kubeadm.go:319] [certs] Using the existing "sa" key
	I1209 04:41:22.773160 1193189 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1209 04:41:22.773215 1193189 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1209 04:41:22.773279 1193189 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1209 04:41:22.773333 1193189 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1209 04:41:22.773401 1193189 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1209 04:41:22.773459 1193189 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1209 04:41:22.773544 1193189 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1209 04:41:22.773604 1193189 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1209 04:41:22.778452 1193189 out.go:252]   - Booting up control plane ...
	I1209 04:41:22.778558 1193189 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1209 04:41:22.778636 1193189 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1209 04:41:22.778708 1193189 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1209 04:41:22.778830 1193189 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1209 04:41:22.778931 1193189 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1209 04:41:22.779034 1193189 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1209 04:41:22.779165 1193189 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1209 04:41:22.779213 1193189 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1209 04:41:22.779347 1193189 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1209 04:41:22.779447 1193189 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1209 04:41:22.779507 1193189 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001187798s
	I1209 04:41:22.779509 1193189 kubeadm.go:319] 
	I1209 04:41:22.779562 1193189 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1209 04:41:22.779605 1193189 kubeadm.go:319] 	- The kubelet is not running
	I1209 04:41:22.779728 1193189 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1209 04:41:22.779731 1193189 kubeadm.go:319] 
	I1209 04:41:22.779842 1193189 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1209 04:41:22.779891 1193189 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1209 04:41:22.779919 1193189 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1209 04:41:22.779932 1193189 kubeadm.go:319] 
	W1209 04:41:22.780053 1193189 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001187798s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
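Editor's note: both kubeadm attempts in this run die at the same wait-control-plane phase, with the kubelet never answering its health endpoint. The probe kubeadm describes can be reproduced by hand; a minimal sketch, assuming shell access to the node through the minikube binary used in this run:

    # probe the kubelet healthz endpoint that kubeadm polls (URL quoted in the log above)
    out/minikube-linux-arm64 -p functional-667319 ssh -- curl -sSL http://127.0.0.1:10248/healthz
    # then inspect the kubelet itself, per kubeadm's own troubleshooting hints
    out/minikube-linux-arm64 -p functional-667319 ssh -- sudo systemctl status kubelet
    out/minikube-linux-arm64 -p functional-667319 ssh -- sudo journalctl -xeu kubelet -n 50

On this run it is the journal output (see the kubelet section near the end of the report) that identifies the actual cgroup v1 validation failure.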
	
	I1209 04:41:22.780164 1193189 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1209 04:41:23.192047 1193189 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 04:41:23.205020 1193189 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1209 04:41:23.205076 1193189 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 04:41:23.212555 1193189 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 04:41:23.212563 1193189 kubeadm.go:158] found existing configuration files:
	
	I1209 04:41:23.212616 1193189 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1209 04:41:23.220135 1193189 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 04:41:23.220190 1193189 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 04:41:23.227342 1193189 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1209 04:41:23.234934 1193189 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 04:41:23.234988 1193189 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 04:41:23.242413 1193189 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1209 04:41:23.249859 1193189 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 04:41:23.249916 1193189 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 04:41:23.257497 1193189 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1209 04:41:23.264938 1193189 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 04:41:23.264993 1193189 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
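Editor's note: the grep/rm sequence above (kubeadm.go:164) is minikube's stale-kubeconfig cleanup: each kubeconfig under /etc/kubernetes is kept only if it already points at the expected control-plane endpoint. A condensed shell sketch of the same check, with the endpoint and file names taken from the log:

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      # keep the file only if it references the expected endpoint; otherwise remove it
      sudo grep -q "https://control-plane.minikube.internal:8441" "/etc/kubernetes/$f" 2>/dev/null \
        || sudo rm -f "/etc/kubernetes/$f"
    done

Here all four files were already removed by the preceding kubeadm reset, so every grep exits with status 2 and the rm calls are no-ops.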
	I1209 04:41:23.272287 1193189 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1209 04:41:23.315971 1193189 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1209 04:41:23.316329 1193189 kubeadm.go:319] [preflight] Running pre-flight checks
	I1209 04:41:23.386479 1193189 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1209 04:41:23.386543 1193189 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1209 04:41:23.386577 1193189 kubeadm.go:319] OS: Linux
	I1209 04:41:23.386622 1193189 kubeadm.go:319] CGROUPS_CPU: enabled
	I1209 04:41:23.386669 1193189 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1209 04:41:23.386716 1193189 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1209 04:41:23.386763 1193189 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1209 04:41:23.386810 1193189 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1209 04:41:23.386857 1193189 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1209 04:41:23.386901 1193189 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1209 04:41:23.386948 1193189 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1209 04:41:23.386993 1193189 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1209 04:41:23.459528 1193189 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1209 04:41:23.459630 1193189 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1209 04:41:23.459719 1193189 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1209 04:41:23.465017 1193189 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1209 04:41:23.470401 1193189 out.go:252]   - Generating certificates and keys ...
	I1209 04:41:23.470490 1193189 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1209 04:41:23.470556 1193189 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1209 04:41:23.470655 1193189 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1209 04:41:23.470730 1193189 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1209 04:41:23.470799 1193189 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1209 04:41:23.470852 1193189 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1209 04:41:23.470919 1193189 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1209 04:41:23.470980 1193189 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1209 04:41:23.471052 1193189 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1209 04:41:23.471123 1193189 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1209 04:41:23.471160 1193189 kubeadm.go:319] [certs] Using the existing "sa" key
	I1209 04:41:23.471222 1193189 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1209 04:41:23.897547 1193189 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1209 04:41:24.071180 1193189 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1209 04:41:24.419266 1193189 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1209 04:41:24.580042 1193189 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1209 04:41:25.012112 1193189 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1209 04:41:25.012658 1193189 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1209 04:41:25.015310 1193189 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1209 04:41:25.018776 1193189 out.go:252]   - Booting up control plane ...
	I1209 04:41:25.018875 1193189 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1209 04:41:25.018952 1193189 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1209 04:41:25.019019 1193189 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1209 04:41:25.039820 1193189 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1209 04:41:25.039928 1193189 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1209 04:41:25.047252 1193189 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1209 04:41:25.047955 1193189 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1209 04:41:25.048349 1193189 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1209 04:41:25.184171 1193189 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1209 04:41:25.184286 1193189 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1209 04:45:25.184394 1193189 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000314916s
	I1209 04:45:25.184418 1193189 kubeadm.go:319] 
	I1209 04:45:25.184509 1193189 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1209 04:45:25.184553 1193189 kubeadm.go:319] 	- The kubelet is not running
	I1209 04:45:25.184657 1193189 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1209 04:45:25.184661 1193189 kubeadm.go:319] 
	I1209 04:45:25.184765 1193189 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1209 04:45:25.184796 1193189 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1209 04:45:25.184826 1193189 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1209 04:45:25.184829 1193189 kubeadm.go:319] 
	I1209 04:45:25.188658 1193189 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1209 04:45:25.189080 1193189 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1209 04:45:25.189188 1193189 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1209 04:45:25.189440 1193189 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1209 04:45:25.189444 1193189 kubeadm.go:319] 
	I1209 04:45:25.189512 1193189 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1209 04:45:25.189563 1193189 kubeadm.go:403] duration metric: took 12m9.073031305s to StartCluster
	I1209 04:45:25.189594 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:45:25.189654 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:45:25.214653 1193189 cri.go:89] found id: ""
	I1209 04:45:25.214667 1193189 logs.go:282] 0 containers: []
	W1209 04:45:25.214674 1193189 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:45:25.214680 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 04:45:25.214745 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:45:25.239781 1193189 cri.go:89] found id: ""
	I1209 04:45:25.239795 1193189 logs.go:282] 0 containers: []
	W1209 04:45:25.239802 1193189 logs.go:284] No container was found matching "etcd"
	I1209 04:45:25.239806 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 04:45:25.239865 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:45:25.263923 1193189 cri.go:89] found id: ""
	I1209 04:45:25.263937 1193189 logs.go:282] 0 containers: []
	W1209 04:45:25.263943 1193189 logs.go:284] No container was found matching "coredns"
	I1209 04:45:25.263949 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:45:25.264009 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:45:25.289497 1193189 cri.go:89] found id: ""
	I1209 04:45:25.289510 1193189 logs.go:282] 0 containers: []
	W1209 04:45:25.289521 1193189 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:45:25.289527 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:45:25.289587 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:45:25.314477 1193189 cri.go:89] found id: ""
	I1209 04:45:25.314491 1193189 logs.go:282] 0 containers: []
	W1209 04:45:25.314497 1193189 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:45:25.314502 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:45:25.314564 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:45:25.343027 1193189 cri.go:89] found id: ""
	I1209 04:45:25.343041 1193189 logs.go:282] 0 containers: []
	W1209 04:45:25.343048 1193189 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:45:25.343054 1193189 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 04:45:25.343116 1193189 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:45:25.372137 1193189 cri.go:89] found id: ""
	I1209 04:45:25.372151 1193189 logs.go:282] 0 containers: []
	W1209 04:45:25.372158 1193189 logs.go:284] No container was found matching "kindnet"
	I1209 04:45:25.372166 1193189 logs.go:123] Gathering logs for kubelet ...
	I1209 04:45:25.372175 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:45:25.430985 1193189 logs.go:123] Gathering logs for dmesg ...
	I1209 04:45:25.431004 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:45:25.448709 1193189 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:45:25.448726 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:45:25.515693 1193189 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:45:25.506884   20840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:25.507687   20840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:25.509338   20840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:25.509652   20840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:25.511142   20840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:45:25.506884   20840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:25.507687   20840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:25.509338   20840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:25.509652   20840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:25.511142   20840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:45:25.515704 1193189 logs.go:123] Gathering logs for containerd ...
	I1209 04:45:25.515716 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 04:45:25.578666 1193189 logs.go:123] Gathering logs for container status ...
	I1209 04:45:25.578686 1193189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
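Editor's note: every crictl query above returns found id: "", which confirms the failure happens before any control-plane container is created: containerd is up, but the kubelet never launched the static pods. The same check can be run by hand, for example for the apiserver, using the crictl invocation from the log:

    # list all kube-apiserver containers known to containerd via the CRI
    out/minikube-linux-arm64 -p functional-667319 ssh -- sudo crictl ps -a --quiet --name=kube-apiserver

An empty result with a healthy containerd points the investigation at the kubelet rather than at the runtime.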
	W1209 04:45:25.609638 1193189 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000314916s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1209 04:45:25.609683 1193189 out.go:285] * 
	W1209 04:45:25.609743 1193189 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000314916s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1209 04:45:25.609756 1193189 out.go:285] * 
	W1209 04:45:25.611848 1193189 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 04:45:25.617063 1193189 out.go:203] 
	W1209 04:45:25.620790 1193189 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000314916s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1209 04:45:25.620840 1193189 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1209 04:45:25.620858 1193189 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1209 04:45:25.624102 1193189 out.go:203] 
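Editor's note: the suggestion above is minikube's generic advice for kubelet startup failures. On this host the kubelet journal below shows a cgroup v1 validation error rather than a cgroup-driver mismatch, so the flag is worth trying but may not reach the root cause. A retry with the suggested flag, reusing the profile and runtime flags visible elsewhere in this log, would look like:

    out/minikube-linux-arm64 start -p functional-667319 --driver=docker --container-runtime=containerd \
      --kubernetes-version=v1.35.0-beta.0 --extra-config=kubelet.cgroup-driver=systemd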
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 09 04:45:34 functional-667319 containerd[9667]: time="2025-12-09T04:45:34.304901430Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-667319\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 09 04:45:35 functional-667319 containerd[9667]: time="2025-12-09T04:45:35.122240132Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-667319\""
	Dec 09 04:45:35 functional-667319 containerd[9667]: time="2025-12-09T04:45:35.124969737Z" level=info msg="ImageDelete event name:\"docker.io/kicbase/echo-server:functional-667319\""
	Dec 09 04:45:35 functional-667319 containerd[9667]: time="2025-12-09T04:45:35.127379711Z" level=info msg="ImageDelete event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\""
	Dec 09 04:45:35 functional-667319 containerd[9667]: time="2025-12-09T04:45:35.136337700Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-667319\" returns successfully"
	Dec 09 04:45:35 functional-667319 containerd[9667]: time="2025-12-09T04:45:35.369054172Z" level=info msg="No images store for sha256:dd3309dec5df27eec01ab59220514c77e78d9b5409234aefaeee1c6a1c609658"
	Dec 09 04:45:35 functional-667319 containerd[9667]: time="2025-12-09T04:45:35.371319041Z" level=info msg="ImageCreate event name:\"docker.io/kicbase/echo-server:functional-667319\""
	Dec 09 04:45:35 functional-667319 containerd[9667]: time="2025-12-09T04:45:35.378181758Z" level=info msg="ImageCreate event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 09 04:45:35 functional-667319 containerd[9667]: time="2025-12-09T04:45:35.378777478Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-667319\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 09 04:45:36 functional-667319 containerd[9667]: time="2025-12-09T04:45:36.438263871Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-667319\""
	Dec 09 04:45:36 functional-667319 containerd[9667]: time="2025-12-09T04:45:36.441243432Z" level=info msg="ImageDelete event name:\"docker.io/kicbase/echo-server:functional-667319\""
	Dec 09 04:45:36 functional-667319 containerd[9667]: time="2025-12-09T04:45:36.443329590Z" level=info msg="ImageDelete event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\""
	Dec 09 04:45:36 functional-667319 containerd[9667]: time="2025-12-09T04:45:36.451953736Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-667319\" returns successfully"
	Dec 09 04:45:36 functional-667319 containerd[9667]: time="2025-12-09T04:45:36.699722091Z" level=info msg="No images store for sha256:dd3309dec5df27eec01ab59220514c77e78d9b5409234aefaeee1c6a1c609658"
	Dec 09 04:45:36 functional-667319 containerd[9667]: time="2025-12-09T04:45:36.702120561Z" level=info msg="ImageCreate event name:\"docker.io/kicbase/echo-server:functional-667319\""
	Dec 09 04:45:36 functional-667319 containerd[9667]: time="2025-12-09T04:45:36.709091689Z" level=info msg="ImageCreate event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 09 04:45:36 functional-667319 containerd[9667]: time="2025-12-09T04:45:36.709423744Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-667319\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 09 04:45:37 functional-667319 containerd[9667]: time="2025-12-09T04:45:37.473128393Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-667319\""
	Dec 09 04:45:37 functional-667319 containerd[9667]: time="2025-12-09T04:45:37.475631173Z" level=info msg="ImageDelete event name:\"docker.io/kicbase/echo-server:functional-667319\""
	Dec 09 04:45:37 functional-667319 containerd[9667]: time="2025-12-09T04:45:37.477592221Z" level=info msg="ImageDelete event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\""
	Dec 09 04:45:37 functional-667319 containerd[9667]: time="2025-12-09T04:45:37.490276659Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-667319\" returns successfully"
	Dec 09 04:45:38 functional-667319 containerd[9667]: time="2025-12-09T04:45:38.148598616Z" level=info msg="No images store for sha256:904ceb29077e75bbca4483a04b0d4e97cdb7c2e3a6b6f3f1bb70ace08229b0b3"
	Dec 09 04:45:38 functional-667319 containerd[9667]: time="2025-12-09T04:45:38.150763877Z" level=info msg="ImageCreate event name:\"docker.io/kicbase/echo-server:functional-667319\""
	Dec 09 04:45:38 functional-667319 containerd[9667]: time="2025-12-09T04:45:38.160850013Z" level=info msg="ImageCreate event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 09 04:45:38 functional-667319 containerd[9667]: time="2025-12-09T04:45:38.161468872Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-667319\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:45:40.024190   21863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:40.025061   21863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:40.026977   21863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:40.027578   21863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:40.029699   21863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 9 03:13] overlayfs: idmapped layers are currently not supported
	[ +25.904254] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:14] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:16] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:18] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:19] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:30] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:32] overlayfs: idmapped layers are currently not supported
	[ +28.114653] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:33] overlayfs: idmapped layers are currently not supported
	[ +23.720849] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:34] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:35] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:36] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:37] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:38] overlayfs: idmapped layers are currently not supported
	[ +23.656275] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:39] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:57] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:58] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:00] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:02] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:03] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:05] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:06] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 04:45:40 up  7:27,  0 user,  load average: 0.33, 0.23, 0.48
	Linux functional-667319 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 09 04:45:36 functional-667319 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 09 04:45:37 functional-667319 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 336.
	Dec 09 04:45:37 functional-667319 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:45:37 functional-667319 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:45:37 functional-667319 kubelet[21646]: E1209 04:45:37.504418   21646 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 09 04:45:37 functional-667319 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 09 04:45:37 functional-667319 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 09 04:45:38 functional-667319 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 337.
	Dec 09 04:45:38 functional-667319 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:45:38 functional-667319 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:45:38 functional-667319 kubelet[21705]: E1209 04:45:38.208878   21705 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 09 04:45:38 functional-667319 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 09 04:45:38 functional-667319 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 09 04:45:38 functional-667319 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 338.
	Dec 09 04:45:38 functional-667319 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:45:38 functional-667319 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:45:38 functional-667319 kubelet[21757]: E1209 04:45:38.994447   21757 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 09 04:45:38 functional-667319 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 09 04:45:38 functional-667319 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 09 04:45:39 functional-667319 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 339.
	Dec 09 04:45:39 functional-667319 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:45:39 functional-667319 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:45:39 functional-667319 kubelet[21785]: E1209 04:45:39.739231   21785 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 09 04:45:39 functional-667319 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 09 04:45:39 functional-667319 systemd[1]: kubelet.service: Failed with result 'exit-code'.
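Editor's note: restart counters 336 through 339 above all fail on the same validation: this kubelet refuses to run on a cgroup v1 host unless explicitly told otherwise. The kubeadm warning earlier in the log names the opt-out option. A minimal KubeletConfiguration fragment, assuming the camelCase YAML spelling of the FailCgroupV1 option named in the kubelet's message:

    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    # opt back into cgroup v1 on kubelet v1.35 or newer; see the KEP link in the kubeadm warning above
    failCgroupV1: false

The warning text itself recommends migrating the host to cgroup v2 instead, which is the durable fix for this CI worker.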
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-667319 -n functional-667319
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-667319 -n functional-667319: exit status 2 (340.763558ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-667319" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (2.18s)
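
The kubelet journal excerpt above captures the root cause shared by most failures in this report: kubelet v1.35.0-beta.0 refuses to start on a cgroup v1 host, fails the same validation on every systemd restart (counters 336 through 339 here), and so the apiserver never comes back up. A quick way to confirm which cgroup hierarchy the node is on is the check below (a diagnostic sketch, not part of the recorded run; it assumes shell access to the node or to the functional-667319 container):

    stat -fc %T /sys/fs/cgroup/
    # "cgroup2fs" means cgroup v2 (unified hierarchy);
    # "tmpfs" means the cgroup v1 layout that this kubelet build rejects

On systemd-based hosts, booting with the kernel parameter systemd.unified_cgroup_hierarchy=1 is the usual way to move to cgroup v2.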

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.57s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-667319 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-667319 tunnel --alsologtostderr]
functional_test_tunnel_test.go:190: tunnel command failed with unexpected error: exit code 103. stderr: I1209 04:45:32.301864 1205807 out.go:360] Setting OutFile to fd 1 ...
I1209 04:45:32.310118 1205807 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1209 04:45:32.310184 1205807 out.go:374] Setting ErrFile to fd 2...
I1209 04:45:32.310205 1205807 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1209 04:45:32.310678 1205807 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-1142328/.minikube/bin
I1209 04:45:32.312500 1205807 mustload.go:66] Loading cluster: functional-667319
I1209 04:45:32.314642 1205807 config.go:182] Loaded profile config "functional-667319": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1209 04:45:32.315446 1205807 cli_runner.go:164] Run: docker container inspect functional-667319 --format={{.State.Status}}
I1209 04:45:32.336627 1205807 host.go:66] Checking if "functional-667319" exists ...
I1209 04:45:32.336963 1205807 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1209 04:45:32.436252 1205807 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-09 04:45:32.420552383 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1209 04:45:32.436376 1205807 api_server.go:166] Checking apiserver status ...
I1209 04:45:32.436444 1205807 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1209 04:45:32.436485 1205807 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-667319
I1209 04:45:32.486468 1205807 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/functional-667319/id_rsa Username:docker}
W1209 04:45:32.616227 1205807 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:
I1209 04:45:32.621557 1205807 out.go:179] * The control-plane node functional-667319 apiserver is not running: (state=Stopped)
I1209 04:45:32.625235 1205807 out.go:179]   To start a cluster, run: "minikube start -p functional-667319"

stdout: * The control-plane node functional-667319 apiserver is not running: (state=Stopped)
To start a cluster, run: "minikube start -p functional-667319"
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-667319 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-667319 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-667319 tunnel --alsologtostderr] stderr:
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-667319 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 1205806: os: process already finished
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-667319 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-667319 tunnel --alsologtostderr] stderr:
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.57s)
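
Exit code 103 is not a tunnel-specific failure: the mustload trace above shows the tunnel command first probes the apiserver with sudo pgrep over SSH and bails out when the probe exits 1 because no kube-apiserver process exists. That probe can be replayed by hand through minikube's ssh wrapper (a sketch mirroring the logged ssh_runner call):

    out/minikube-linux-arm64 -p functional-667319 ssh -- "sudo pgrep -xnf kube-apiserver.*minikube.*"
    # a non-zero exit means no kube-apiserver process, i.e. state=Stopped as reported above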

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup (0.09s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-667319 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:212: (dbg) Non-zero exit: kubectl --context functional-667319 apply -f testdata/testsvc.yaml: exit status 1 (84.653328ms)

** stderr ** 
	error: error validating "testdata/testsvc.yaml": error validating data: failed to download openapi: Get "https://192.168.49.2:8441/openapi/v2?timeout=32s": dial tcp 192.168.49.2:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false

** /stderr **
functional_test_tunnel_test.go:214: kubectl --context functional-667319 apply -f testdata/testsvc.yaml failed: exit status 1
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup (0.09s)
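
The --validate=false workaround suggested in the error text would only skip the failing OpenAPI download; the apply itself would still fail, because the underlying problem is the refused connection to 192.168.49.2:8441, not validation. For reference, the invocation kubectl's own hint proposes is:

    kubectl --context functional-667319 apply -f testdata/testsvc.yaml --validate=false
    # still expected to fail here until the apiserver is reachable again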

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (111.35s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:288: failed to hit nginx at "http://10.98.178.96": Temporary Error: Get "http://10.98.178.96": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-667319 get svc nginx-svc
functional_test_tunnel_test.go:290: (dbg) Non-zero exit: kubectl --context functional-667319 get svc nginx-svc: exit status 1 (64.723805ms)

** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test_tunnel_test.go:292: kubectl --context functional-667319 get svc nginx-svc failed: exit status 1
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (111.35s)
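
The 111s duration here is the test's HTTP poll against the tunnel target timing out. Both halves of the check can be replayed manually once a cluster is healthy (a sketch; 10.98.178.96 is the ClusterIP assigned in this particular run):

    curl -m 5 http://10.98.178.96
    # the test expects "Welcome to nginx!" in the response body
    kubectl --context functional-667319 get svc nginx-svc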

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (2.39s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-667319 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2835554719/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1765255545185688072" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2835554719/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1765255545185688072" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2835554719/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1765255545185688072" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2835554719/001/test-1765255545185688072
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-667319 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-667319 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (401.353301ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1209 04:45:45.587389 1144231 retry.go:31] will retry after 425.930358ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-667319 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-667319 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec  9 04:45 created-by-test
-rw-r--r-- 1 docker docker 24 Dec  9 04:45 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec  9 04:45 test-1765255545185688072
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-667319 ssh cat /mount-9p/test-1765255545185688072
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-667319 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:148: (dbg) Non-zero exit: kubectl --context functional-667319 replace --force -f testdata/busybox-mount-test.yaml: exit status 1 (54.423526ms)

** stderr ** 
	error: error when deleting "testdata/busybox-mount-test.yaml": Delete "https://192.168.49.2:8441/api/v1/namespaces/default/pods/busybox-mount": dial tcp 192.168.49.2:8441: connect: connection refused

** /stderr **
functional_test_mount_test.go:150: failed to 'kubectl replace' for busybox-mount-test. args "kubectl --context functional-667319 replace --force -f testdata/busybox-mount-test.yaml" : exit status 1
functional_test_mount_test.go:80: "TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port" failed, getting debug info...
functional_test_mount_test.go:81: (dbg) Run:  out/minikube-linux-arm64 -p functional-667319 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates"
functional_test_mount_test.go:81: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-667319 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates": exit status 1 (272.101354ms)

-- stdout --
	192.168.49.1 on /mount-9p type 9p (rw,relatime,sync,dirsync,dfltuid=1000,dfltgid=997,access=any,msize=262144,trans=tcp,noextend,port=34203)
	total 2
	-rw-r--r-- 1 docker docker 24 Dec  9 04:45 created-by-test
	-rw-r--r-- 1 docker docker 24 Dec  9 04:45 created-by-test-removed-by-pod
	-rw-r--r-- 1 docker docker 24 Dec  9 04:45 test-1765255545185688072
	cat: /mount-9p/pod-dates: No such file or directory

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:83: debugging command "out/minikube-linux-arm64 -p functional-667319 ssh \"mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates\"" failed : exit status 1
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-667319 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-667319 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2835554719/001:/mount-9p --alsologtostderr -v=1] ...
functional_test_mount_test.go:94: (dbg) [out/minikube-linux-arm64 mount -p functional-667319 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2835554719/001:/mount-9p --alsologtostderr -v=1] stdout:
* Mounting host path /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2835554719/001 into VM as /mount-9p ...
- Mount type:   9p
- User ID:      docker
- Group ID:     docker
- Version:      9p2000.L
- Message Size: 262144
- Options:      map[]
- Bind Address: 192.168.49.1:34203
* Userspace file server: 
ufs starting
* Successfully mounted /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2835554719/001 to /mount-9p

* NOTE: This process must stay alive for the mount to be accessible ...
* Unmounting /mount-9p ...

functional_test_mount_test.go:94: (dbg) [out/minikube-linux-arm64 mount -p functional-667319 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2835554719/001:/mount-9p --alsologtostderr -v=1] stderr:
I1209 04:45:45.271709 1208082 out.go:360] Setting OutFile to fd 1 ...
I1209 04:45:45.271965 1208082 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1209 04:45:45.271993 1208082 out.go:374] Setting ErrFile to fd 2...
I1209 04:45:45.272050 1208082 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1209 04:45:45.274385 1208082 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-1142328/.minikube/bin
I1209 04:45:45.274860 1208082 mustload.go:66] Loading cluster: functional-667319
I1209 04:45:45.275290 1208082 config.go:182] Loaded profile config "functional-667319": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1209 04:45:45.275879 1208082 cli_runner.go:164] Run: docker container inspect functional-667319 --format={{.State.Status}}
I1209 04:45:45.312978 1208082 host.go:66] Checking if "functional-667319" exists ...
I1209 04:45:45.313337 1208082 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1209 04:45:45.399894 1208082 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-09 04:45:45.387711384 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1209 04:45:45.400090 1208082 cli_runner.go:164] Run: docker network inspect functional-667319 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1209 04:45:45.446370 1208082 out.go:179] * Mounting host path /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2835554719/001 into VM as /mount-9p ...
I1209 04:45:45.449574 1208082 out.go:179]   - Mount type:   9p
I1209 04:45:45.452810 1208082 out.go:179]   - User ID:      docker
I1209 04:45:45.455822 1208082 out.go:179]   - Group ID:     docker
I1209 04:45:45.458759 1208082 out.go:179]   - Version:      9p2000.L
I1209 04:45:45.461739 1208082 out.go:179]   - Message Size: 262144
I1209 04:45:45.464621 1208082 out.go:179]   - Options:      map[]
I1209 04:45:45.467483 1208082 out.go:179]   - Bind Address: 192.168.49.1:34203
I1209 04:45:45.470393 1208082 out.go:179] * Userspace file server: 
I1209 04:45:45.473132 1208082 ssh_runner.go:195] Run: /bin/bash -c "[ "x$(findmnt -T /mount-9p | grep /mount-9p)" != "x" ] && sudo umount -f -l /mount-9p || echo "
I1209 04:45:45.473252 1208082 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-667319
I1209 04:45:45.493331 1208082 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/functional-667319/id_rsa Username:docker}
I1209 04:45:45.602553 1208082 mount.go:180] unmount for /mount-9p ran successfully
I1209 04:45:45.602581 1208082 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /mount-9p"
I1209 04:45:45.610922 1208082 ssh_runner.go:195] Run: /bin/bash -c "sudo mount -t 9p -o dfltgid=$(grep ^docker: /etc/group | cut -d: -f3),dfltuid=$(id -u docker),msize=262144,port=34203,trans=tcp,version=9p2000.L 192.168.49.1 /mount-9p"
I1209 04:45:45.621426 1208082 main.go:127] stdlog: ufs.go:141 connected
I1209 04:45:45.621602 1208082 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:37642 Tversion tag 65535 msize 262144 version '9P2000.L'
I1209 04:45:45.621652 1208082 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:37642 Rversion tag 65535 msize 262144 version '9P2000'
I1209 04:45:45.621863 1208082 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:37642 Tattach tag 0 fid 0 afid 4294967295 uname 'nobody' nuname 0 aname ''
I1209 04:45:45.621924 1208082 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:37642 Rattach tag 0 aqid (3b622b 16e015d 'd')
I1209 04:45:45.622208 1208082 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:37642 Tstat tag 0 fid 0
I1209 04:45:45.622276 1208082 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:37642 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (3b622b 16e015d 'd') m d775 at 0 mt 1765255545 l 4096 t 0 d 0 ext )
I1209 04:45:45.629234 1208082 lock.go:50] WriteFile acquiring /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/.mount-process: {Name:mkcd3861f1ffc16c0e82ed687c676d5a02cf21d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1209 04:45:45.629500 1208082 mount.go:105] mount successful: ""
I1209 04:45:45.632914 1208082 out.go:179] * Successfully mounted /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2835554719/001 to /mount-9p
I1209 04:45:45.635864 1208082 out.go:203] 
I1209 04:45:45.638821 1208082 out.go:179] * NOTE: This process must stay alive for the mount to be accessible ...
I1209 04:45:46.556797 1208082 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:37642 Tstat tag 0 fid 0
I1209 04:45:46.556871 1208082 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:37642 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (3b622b 16e015d 'd') m d775 at 0 mt 1765255545 l 4096 t 0 d 0 ext )
I1209 04:45:46.557254 1208082 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:37642 Twalk tag 0 fid 0 newfid 1 
I1209 04:45:46.557286 1208082 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:37642 Rwalk tag 0 
I1209 04:45:46.557448 1208082 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:37642 Topen tag 0 fid 1 mode 0
I1209 04:45:46.557505 1208082 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:37642 Ropen tag 0 qid (3b622b 16e015d 'd') iounit 0
I1209 04:45:46.557661 1208082 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:37642 Tstat tag 0 fid 0
I1209 04:45:46.557697 1208082 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:37642 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (3b622b 16e015d 'd') m d775 at 0 mt 1765255545 l 4096 t 0 d 0 ext )
I1209 04:45:46.557868 1208082 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:37642 Tread tag 0 fid 1 offset 0 count 262120
I1209 04:45:46.557992 1208082 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:37642 Rread tag 0 count 258
I1209 04:45:46.558134 1208082 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:37642 Tread tag 0 fid 1 offset 258 count 261862
I1209 04:45:46.558160 1208082 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:37642 Rread tag 0 count 0
I1209 04:45:46.558307 1208082 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:37642 Tread tag 0 fid 1 offset 258 count 262120
I1209 04:45:46.558336 1208082 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:37642 Rread tag 0 count 0
I1209 04:45:46.558468 1208082 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:37642 Twalk tag 0 fid 0 newfid 2 0:'created-by-test' 
I1209 04:45:46.558498 1208082 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:37642 Rwalk tag 0 (3b622d 16e015d '') 
I1209 04:45:46.558634 1208082 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:37642 Tstat tag 0 fid 2
I1209 04:45:46.558671 1208082 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:37642 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (3b622d 16e015d '') m 644 at 0 mt 1765255545 l 24 t 0 d 0 ext )
I1209 04:45:46.558796 1208082 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:37642 Tstat tag 0 fid 2
I1209 04:45:46.558831 1208082 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:37642 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (3b622d 16e015d '') m 644 at 0 mt 1765255545 l 24 t 0 d 0 ext )
I1209 04:45:46.558977 1208082 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:37642 Tclunk tag 0 fid 2
I1209 04:45:46.559000 1208082 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:37642 Rclunk tag 0
I1209 04:45:46.559151 1208082 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:37642 Twalk tag 0 fid 0 newfid 2 0:'test-1765255545185688072' 
I1209 04:45:46.559195 1208082 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:37642 Rwalk tag 0 (3b622f 16e015d '') 
I1209 04:45:46.559326 1208082 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:37642 Tstat tag 0 fid 2
I1209 04:45:46.559363 1208082 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:37642 Rstat tag 0 st ('test-1765255545185688072' 'jenkins' 'jenkins' '' q (3b622f 16e015d '') m 644 at 0 mt 1765255545 l 24 t 0 d 0 ext )
I1209 04:45:46.559533 1208082 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:37642 Tstat tag 0 fid 2
I1209 04:45:46.559601 1208082 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:37642 Rstat tag 0 st ('test-1765255545185688072' 'jenkins' 'jenkins' '' q (3b622f 16e015d '') m 644 at 0 mt 1765255545 l 24 t 0 d 0 ext )
I1209 04:45:46.559791 1208082 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:37642 Tclunk tag 0 fid 2
I1209 04:45:46.559816 1208082 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:37642 Rclunk tag 0
I1209 04:45:46.559958 1208082 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:37642 Twalk tag 0 fid 0 newfid 2 0:'created-by-test-removed-by-pod' 
I1209 04:45:46.560002 1208082 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:37642 Rwalk tag 0 (3b622e 16e015d '') 
I1209 04:45:46.560138 1208082 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:37642 Tstat tag 0 fid 2
I1209 04:45:46.560173 1208082 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:37642 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (3b622e 16e015d '') m 644 at 0 mt 1765255545 l 24 t 0 d 0 ext )
I1209 04:45:46.560305 1208082 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:37642 Tstat tag 0 fid 2
I1209 04:45:46.560347 1208082 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:37642 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (3b622e 16e015d '') m 644 at 0 mt 1765255545 l 24 t 0 d 0 ext )
I1209 04:45:46.560475 1208082 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:37642 Tclunk tag 0 fid 2
I1209 04:45:46.560501 1208082 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:37642 Rclunk tag 0
I1209 04:45:46.560661 1208082 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:37642 Tread tag 0 fid 1 offset 258 count 262120
I1209 04:45:46.560701 1208082 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:37642 Rread tag 0 count 0
I1209 04:45:46.560834 1208082 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:37642 Tclunk tag 0 fid 1
I1209 04:45:46.560862 1208082 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:37642 Rclunk tag 0
I1209 04:45:46.832109 1208082 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:37642 Twalk tag 0 fid 0 newfid 1 0:'test-1765255545185688072' 
I1209 04:45:46.832213 1208082 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:37642 Rwalk tag 0 (3b622f 16e015d '') 
I1209 04:45:46.832396 1208082 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:37642 Tstat tag 0 fid 1
I1209 04:45:46.832459 1208082 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:37642 Rstat tag 0 st ('test-1765255545185688072' 'jenkins' 'jenkins' '' q (3b622f 16e015d '') m 644 at 0 mt 1765255545 l 24 t 0 d 0 ext )
I1209 04:45:46.832584 1208082 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:37642 Twalk tag 0 fid 1 newfid 2 
I1209 04:45:46.832619 1208082 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:37642 Rwalk tag 0 
I1209 04:45:46.832771 1208082 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:37642 Topen tag 0 fid 2 mode 0
I1209 04:45:46.832819 1208082 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:37642 Ropen tag 0 qid (3b622f 16e015d '') iounit 0
I1209 04:45:46.832951 1208082 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:37642 Tstat tag 0 fid 1
I1209 04:45:46.832996 1208082 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:37642 Rstat tag 0 st ('test-1765255545185688072' 'jenkins' 'jenkins' '' q (3b622f 16e015d '') m 644 at 0 mt 1765255545 l 24 t 0 d 0 ext )
I1209 04:45:46.833143 1208082 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:37642 Tread tag 0 fid 2 offset 0 count 262120
I1209 04:45:46.833186 1208082 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:37642 Rread tag 0 count 24
I1209 04:45:46.833309 1208082 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:37642 Tread tag 0 fid 2 offset 24 count 262120
I1209 04:45:46.833339 1208082 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:37642 Rread tag 0 count 0
I1209 04:45:46.833487 1208082 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:37642 Tread tag 0 fid 2 offset 24 count 262120
I1209 04:45:46.833520 1208082 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:37642 Rread tag 0 count 0
I1209 04:45:46.833677 1208082 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:37642 Tclunk tag 0 fid 2
I1209 04:45:46.833711 1208082 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:37642 Rclunk tag 0
I1209 04:45:46.833902 1208082 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:37642 Tclunk tag 0 fid 1
I1209 04:45:46.833923 1208082 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:37642 Rclunk tag 0
I1209 04:45:47.162508 1208082 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:37642 Tstat tag 0 fid 0
I1209 04:45:47.162579 1208082 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:37642 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (3b622b 16e015d 'd') m d775 at 0 mt 1765255545 l 4096 t 0 d 0 ext )
I1209 04:45:47.162893 1208082 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:37642 Twalk tag 0 fid 0 newfid 1 
I1209 04:45:47.162929 1208082 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:37642 Rwalk tag 0 
I1209 04:45:47.163064 1208082 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:37642 Topen tag 0 fid 1 mode 0
I1209 04:45:47.163108 1208082 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:37642 Ropen tag 0 qid (3b622b 16e015d 'd') iounit 0
I1209 04:45:47.163250 1208082 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:37642 Tstat tag 0 fid 0
I1209 04:45:47.163286 1208082 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:37642 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (3b622b 16e015d 'd') m d775 at 0 mt 1765255545 l 4096 t 0 d 0 ext )
I1209 04:45:47.163436 1208082 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:37642 Tread tag 0 fid 1 offset 0 count 262120
I1209 04:45:47.163520 1208082 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:37642 Rread tag 0 count 258
I1209 04:45:47.163638 1208082 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:37642 Tread tag 0 fid 1 offset 258 count 261862
I1209 04:45:47.163665 1208082 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:37642 Rread tag 0 count 0
I1209 04:45:47.163786 1208082 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:37642 Tread tag 0 fid 1 offset 258 count 262120
I1209 04:45:47.163813 1208082 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:37642 Rread tag 0 count 0
I1209 04:45:47.163931 1208082 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:37642 Twalk tag 0 fid 0 newfid 2 0:'created-by-test' 
I1209 04:45:47.163959 1208082 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:37642 Rwalk tag 0 (3b622d 16e015d '') 
I1209 04:45:47.164108 1208082 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:37642 Tstat tag 0 fid 2
I1209 04:45:47.164147 1208082 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:37642 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (3b622d 16e015d '') m 644 at 0 mt 1765255545 l 24 t 0 d 0 ext )
I1209 04:45:47.164270 1208082 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:37642 Tstat tag 0 fid 2
I1209 04:45:47.164303 1208082 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:37642 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (3b622d 16e015d '') m 644 at 0 mt 1765255545 l 24 t 0 d 0 ext )
I1209 04:45:47.164430 1208082 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:37642 Tclunk tag 0 fid 2
I1209 04:45:47.164460 1208082 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:37642 Rclunk tag 0
I1209 04:45:47.164588 1208082 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:37642 Twalk tag 0 fid 0 newfid 2 0:'test-1765255545185688072' 
I1209 04:45:47.164620 1208082 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:37642 Rwalk tag 0 (3b622f 16e015d '') 
I1209 04:45:47.164745 1208082 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:37642 Tstat tag 0 fid 2
I1209 04:45:47.164777 1208082 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:37642 Rstat tag 0 st ('test-1765255545185688072' 'jenkins' 'jenkins' '' q (3b622f 16e015d '') m 644 at 0 mt 1765255545 l 24 t 0 d 0 ext )
I1209 04:45:47.164891 1208082 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:37642 Tstat tag 0 fid 2
I1209 04:45:47.164923 1208082 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:37642 Rstat tag 0 st ('test-1765255545185688072' 'jenkins' 'jenkins' '' q (3b622f 16e015d '') m 644 at 0 mt 1765255545 l 24 t 0 d 0 ext )
I1209 04:45:47.165043 1208082 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:37642 Tclunk tag 0 fid 2
I1209 04:45:47.165062 1208082 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:37642 Rclunk tag 0
I1209 04:45:47.165186 1208082 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:37642 Twalk tag 0 fid 0 newfid 2 0:'created-by-test-removed-by-pod' 
I1209 04:45:47.165216 1208082 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:37642 Rwalk tag 0 (3b622e 16e015d '') 
I1209 04:45:47.165333 1208082 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:37642 Tstat tag 0 fid 2
I1209 04:45:47.165358 1208082 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:37642 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (3b622e 16e015d '') m 644 at 0 mt 1765255545 l 24 t 0 d 0 ext )
I1209 04:45:47.165475 1208082 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:37642 Tstat tag 0 fid 2
I1209 04:45:47.165504 1208082 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:37642 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (3b622e 16e015d '') m 644 at 0 mt 1765255545 l 24 t 0 d 0 ext )
I1209 04:45:47.165628 1208082 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:37642 Tclunk tag 0 fid 2
I1209 04:45:47.165647 1208082 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:37642 Rclunk tag 0
I1209 04:45:47.165760 1208082 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:37642 Tread tag 0 fid 1 offset 258 count 262120
I1209 04:45:47.165785 1208082 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:37642 Rread tag 0 count 0
I1209 04:45:47.165933 1208082 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:37642 Tclunk tag 0 fid 1
I1209 04:45:47.165964 1208082 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:37642 Rclunk tag 0
I1209 04:45:47.167199 1208082 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:37642 Twalk tag 0 fid 0 newfid 1 0:'pod-dates' 
I1209 04:45:47.167262 1208082 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:37642 Rerror tag 0 ename 'file not found' ecode 0
I1209 04:45:47.450107 1208082 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:37642 Tclunk tag 0 fid 0
I1209 04:45:47.450158 1208082 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:37642 Rclunk tag 0
I1209 04:45:47.451180 1208082 main.go:127] stdlog: ufs.go:147 disconnected
I1209 04:45:47.472707 1208082 out.go:179] * Unmounting /mount-9p ...
I1209 04:45:47.475642 1208082 ssh_runner.go:195] Run: /bin/bash -c "[ "x$(findmnt -T /mount-9p | grep /mount-9p)" != "x" ] && sudo umount -f -l /mount-9p || echo "
I1209 04:45:47.482515 1208082 mount.go:180] unmount for /mount-9p ran successfully
I1209 04:45:47.482662 1208082 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/.mount-process: {Name:mkcd3861f1ffc16c0e82ed687c676d5a02cf21d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1209 04:45:47.485935 1208082 out.go:203] 
W1209 04:45:47.488815 1208082 out.go:285] X Exiting due to MK_INTERRUPTED: Received terminated signal
X Exiting due to MK_INTERRUPTED: Received terminated signal
I1209 04:45:47.491746 1208082 out.go:203] 
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (2.39s)
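
Worth noting: the 9p mount itself succeeded in this test. The findmnt and debug output above shows /mount-9p served from 192.168.49.1:34203, and the failure is again the refused apiserver connection when the busybox-mount pod is replaced, after which the harness tears the mount down (MK_INTERRUPTED is the expected exit for the killed mount daemon). The guest-side mount minikube issues, with the shell substitutions from the logged ssh_runner command resolved against the findmnt output, is roughly:

    sudo mount -t 9p -o dfltgid=997,dfltuid=1000,msize=262144,port=34203,trans=tcp,version=9p2000.L 192.168.49.1 /mount-9p
    # port 34203 is the ephemeral bind port for this run and changes between runs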

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-667319 create deployment hello-node --image kicbase/echo-server
functional_test.go:1451: (dbg) Non-zero exit: kubectl --context functional-667319 create deployment hello-node --image kicbase/echo-server: exit status 1 (55.603967ms)

** stderr ** 
	error: failed to create deployment: Post "https://192.168.49.2:8441/apis/apps/v1/namespaces/default/deployments?fieldManager=kubectl-create&fieldValidation=Strict": dial tcp 192.168.49.2:8441: connect: connection refused

** /stderr **
functional_test.go:1453: failed to create hello-node deployment with this command "kubectl --context functional-667319 create deployment hello-node --image kicbase/echo-server": exit status 1.
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (0.06s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (0.27s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-667319 service list
functional_test.go:1469: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-667319 service list: exit status 103 (268.06058ms)

-- stdout --
	* The control-plane node functional-667319 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-667319"

-- /stdout --
functional_test.go:1471: failed to do service list. args "out/minikube-linux-arm64 -p functional-667319 service list" : exit status 103
functional_test.go:1474: expected 'service list' to contain *hello-node* but got -"* The control-plane node functional-667319 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-667319\"\n"-
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (0.27s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (0.26s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-667319 service list -o json
functional_test.go:1499: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-667319 service list -o json: exit status 103 (256.100069ms)

-- stdout --
	* The control-plane node functional-667319 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-667319"

-- /stdout --
functional_test.go:1501: failed to list services with json format. args "out/minikube-linux-arm64 -p functional-667319 service list -o json": exit status 103
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (0.26s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.26s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-667319 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-667319 service --namespace=default --https --url hello-node: exit status 103 (258.982398ms)

-- stdout --
	* The control-plane node functional-667319 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-667319"

-- /stdout --
functional_test.go:1521: failed to get service url. args "out/minikube-linux-arm64 -p functional-667319 service --namespace=default --https --url hello-node" : exit status 103
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.26s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.28s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-667319 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-667319 service hello-node --url --format={{.IP}}: exit status 103 (280.120868ms)

-- stdout --
	* The control-plane node functional-667319 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-667319"

-- /stdout --
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-arm64 -p functional-667319 service hello-node --url --format={{.IP}}": exit status 103
functional_test.go:1558: "* The control-plane node functional-667319 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-667319\"" is not a valid IP
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.28s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.26s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-667319 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-667319 service hello-node --url: exit status 103 (260.76295ms)

-- stdout --
	* The control-plane node functional-667319 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-667319"

-- /stdout --
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-arm64 -p functional-667319 service hello-node --url": exit status 103
functional_test.go:1575: found endpoint for hello-node: * The control-plane node functional-667319 apiserver is not running: (state=Stopped)
To start a cluster, run: "minikube start -p functional-667319"
functional_test.go:1579: failed to parse "* The control-plane node functional-667319 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-667319\"": parse "* The control-plane node functional-667319 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-667319\"": net/url: invalid control character in URL
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.26s)
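
All six ServiceCmd subtests above fail for the same non-service reason: DeployApp fails on the refused connection to 192.168.49.2:8441, and List, JSONOutput, HTTPS, Format and URL each get minikube's exit code 103 for a stopped apiserver. The recovery step is the one the tool itself prints, using this run's test binary:

    out/minikube-linux-arm64 start -p functional-667319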

TestKubernetesUpgrade (796.41s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-511751 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-511751 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (37.987427193s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-511751
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-511751: (1.365452009s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-511751 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-511751 status --format={{.Host}}: exit status 7 (70.006688ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-511751 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-511751 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: exit status 109 (12m31.321792069s)

-- stdout --
	* [kubernetes-upgrade-511751] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22081
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22081-1142328/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-1142328/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "kubernetes-upgrade-511751" primary control-plane node in "kubernetes-upgrade-511751" cluster
	* Pulling base image v0.0.48-1765184860-22066 ...
	* Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	
	

-- /stdout --
** stderr ** 
	I1209 05:18:19.409558 1340508 out.go:360] Setting OutFile to fd 1 ...
	I1209 05:18:19.409718 1340508 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 05:18:19.409743 1340508 out.go:374] Setting ErrFile to fd 2...
	I1209 05:18:19.409770 1340508 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 05:18:19.410081 1340508 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-1142328/.minikube/bin
	I1209 05:18:19.410580 1340508 out.go:368] Setting JSON to false
	I1209 05:18:19.411656 1340508 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":28823,"bootTime":1765228677,"procs":196,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1209 05:18:19.411733 1340508 start.go:143] virtualization:  
	I1209 05:18:19.417015 1340508 out.go:179] * [kubernetes-upgrade-511751] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1209 05:18:19.420081 1340508 out.go:179]   - MINIKUBE_LOCATION=22081
	I1209 05:18:19.420198 1340508 notify.go:221] Checking for updates...
	I1209 05:18:19.425987 1340508 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 05:18:19.429062 1340508 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22081-1142328/kubeconfig
	I1209 05:18:19.432059 1340508 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-1142328/.minikube
	I1209 05:18:19.435021 1340508 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1209 05:18:19.437996 1340508 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 05:18:19.441411 1340508 config.go:182] Loaded profile config "kubernetes-upgrade-511751": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1209 05:18:19.442024 1340508 driver.go:422] Setting default libvirt URI to qemu:///system
	I1209 05:18:19.461397 1340508 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1209 05:18:19.461503 1340508 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 05:18:19.523679 1340508 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-09 05:18:19.514217453 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1209 05:18:19.523776 1340508 docker.go:319] overlay module found
	I1209 05:18:19.527004 1340508 out.go:179] * Using the docker driver based on existing profile
	I1209 05:18:19.529839 1340508 start.go:309] selected driver: docker
	I1209 05:18:19.529863 1340508 start.go:927] validating driver "docker" against &{Name:kubernetes-upgrade-511751 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:kubernetes-upgrade-511751 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 05:18:19.529975 1340508 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 05:18:19.530723 1340508 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 05:18:19.582691 1340508 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-09 05:18:19.574044752 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
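	The two `docker system info --format "{{json .}}"` probes above are how minikube confirms the daemon is healthy before reusing the existing profile. A minimal sketch of such a probe in Go, assuming only a docker CLI on PATH (the struct below decodes just a few of the many fields visible in the info dump; it is not minikube's actual type):

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // dockerInfo decodes only the handful of fields this sketch inspects.
    type dockerInfo struct {
        ServerVersion     string `json:"ServerVersion"`
        ContainersRunning int    `json:"ContainersRunning"`
        CgroupDriver      string `json:"CgroupDriver"`
    }

    func main() {
        out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
        if err != nil {
            fmt.Println("docker daemon not reachable:", err)
            return
        }
        var info dockerInfo
        if err := json.Unmarshal(out, &info); err != nil {
            fmt.Println("unexpected payload:", err)
            return
        }
        fmt.Printf("server %s, %d running, cgroup driver %s\n", info.ServerVersion, info.ContainersRunning, info.CgroupDriver)
    }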
	I1209 05:18:19.583009 1340508 cni.go:84] Creating CNI manager for ""
	I1209 05:18:19.583083 1340508 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1209 05:18:19.583126 1340508 start.go:353] cluster config:
	{Name:kubernetes-upgrade-511751 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-511751 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 05:18:19.586173 1340508 out.go:179] * Starting "kubernetes-upgrade-511751" primary control-plane node in "kubernetes-upgrade-511751" cluster
	I1209 05:18:19.589032 1340508 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1209 05:18:19.592106 1340508 out.go:179] * Pulling base image v0.0.48-1765184860-22066 ...
	I1209 05:18:19.594985 1340508 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1209 05:18:19.595020 1340508 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local docker daemon
	I1209 05:18:19.595035 1340508 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1209 05:18:19.595045 1340508 cache.go:65] Caching tarball of preloaded images
	I1209 05:18:19.595141 1340508 preload.go:238] Found /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1209 05:18:19.595151 1340508 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1209 05:18:19.595255 1340508 profile.go:143] Saving config to /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/kubernetes-upgrade-511751/config.json ...
	I1209 05:18:19.617821 1340508 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local docker daemon, skipping pull
	I1209 05:18:19.617848 1340508 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c exists in daemon, skipping load
	I1209 05:18:19.617864 1340508 cache.go:243] Successfully downloaded all kic artifacts
	I1209 05:18:19.617895 1340508 start.go:360] acquireMachinesLock for kubernetes-upgrade-511751: {Name:mkb6ff89cf565c58f2974727a056dc5368195c4a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 05:18:19.617959 1340508 start.go:364] duration metric: took 40.648µs to acquireMachinesLock for "kubernetes-upgrade-511751"
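	The lock spec printed above ({... Delay:500ms Timeout:10m0s ...}) means acquisition is retried every 500ms for up to ten minutes before giving up; here it succeeds in 40µs because nothing else holds it. A rough illustration of that poll-with-deadline pattern using a lock file (the helper name and the file-based mechanism are invented for this sketch; minikube's real lock is implemented differently):

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // tryAcquire polls for an exclusive lock file every delay,
    // giving up once timeout elapses.
    func tryAcquire(path string, delay, timeout time.Duration) (release func(), err error) {
        deadline := time.Now().Add(timeout)
        for {
            f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
            if err == nil {
                f.Close()
                return func() { os.Remove(path) }, nil
            }
            if !os.IsExist(err) {
                return nil, err
            }
            if time.Now().After(deadline) {
                return nil, fmt.Errorf("timed out acquiring %s after %s", path, timeout)
            }
            time.Sleep(delay)
        }
    }

    func main() {
        release, err := tryAcquire("/tmp/minikube-machines.lock", 500*time.Millisecond, 10*time.Minute)
        if err != nil {
            fmt.Println(err)
            return
        }
        defer release()
        fmt.Println("lock held; safe to start the machine")
    }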
	I1209 05:18:19.617983 1340508 start.go:96] Skipping create...Using existing machine configuration
	I1209 05:18:19.617992 1340508 fix.go:54] fixHost starting: 
	I1209 05:18:19.618273 1340508 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-511751 --format={{.State.Status}}
	I1209 05:18:19.635209 1340508 fix.go:112] recreateIfNeeded on kubernetes-upgrade-511751: state=Stopped err=<nil>
	W1209 05:18:19.635240 1340508 fix.go:138] unexpected machine state, will restart: <nil>
	I1209 05:18:19.638558 1340508 out.go:252] * Restarting existing docker container for "kubernetes-upgrade-511751" ...
	I1209 05:18:19.638659 1340508 cli_runner.go:164] Run: docker start kubernetes-upgrade-511751
	I1209 05:18:19.955115 1340508 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-511751 --format={{.State.Status}}
	I1209 05:18:19.981964 1340508 kic.go:430] container "kubernetes-upgrade-511751" state is running.
	I1209 05:18:19.982349 1340508 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-511751
	I1209 05:18:20.009651 1340508 profile.go:143] Saving config to /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/kubernetes-upgrade-511751/config.json ...
	I1209 05:18:20.009929 1340508 machine.go:94] provisionDockerMachine start ...
	I1209 05:18:20.009998 1340508 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-511751
	I1209 05:18:20.046978 1340508 main.go:143] libmachine: Using SSH client type: native
	I1209 05:18:20.047316 1340508 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 34125 <nil> <nil>}
	I1209 05:18:20.047330 1340508 main.go:143] libmachine: About to run SSH command:
	hostname
	I1209 05:18:20.047881 1340508 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:56594->127.0.0.1:34125: read: connection reset by peer
	I1209 05:18:23.199714 1340508 main.go:143] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-511751
	
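	The dial failure at 05:18:20 ("connection reset by peer") followed by a clean `hostname` result three seconds later shows the provisioner simply retrying until sshd inside the freshly restarted container starts accepting connections. A sketch of that retry loop with golang.org/x/crypto/ssh (key path, attempt count, and backoff are illustrative, not minikube's actual values):

    package main

    import (
        "fmt"
        "log"
        "os"
        "time"

        "golang.org/x/crypto/ssh"
    )

    // dialWithRetry keeps dialing until sshd answers or attempts run out.
    func dialWithRetry(addr string, cfg *ssh.ClientConfig, attempts int) (*ssh.Client, error) {
        var lastErr error
        for i := 0; i < attempts; i++ {
            c, err := ssh.Dial("tcp", addr, cfg)
            if err == nil {
                return c, nil
            }
            lastErr = err
            time.Sleep(time.Second) // the container may still be booting
        }
        return nil, fmt.Errorf("ssh never came up: %w", lastErr)
    }

    func main() {
        key, err := os.ReadFile(os.Getenv("HOME") + "/.minikube/machines/kubernetes-upgrade-511751/id_rsa") // placeholder path
        if err != nil {
            log.Fatal(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            log.Fatal(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local test container
            Timeout:         5 * time.Second,
        }
        client, err := dialWithRetry("127.0.0.1:34125", cfg, 30)
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()
        fmt.Println("connected")
    }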
	I1209 05:18:23.199739 1340508 ubuntu.go:182] provisioning hostname "kubernetes-upgrade-511751"
	I1209 05:18:23.199814 1340508 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-511751
	I1209 05:18:23.217540 1340508 main.go:143] libmachine: Using SSH client type: native
	I1209 05:18:23.217864 1340508 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 34125 <nil> <nil>}
	I1209 05:18:23.217879 1340508 main.go:143] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-511751 && echo "kubernetes-upgrade-511751" | sudo tee /etc/hostname
	I1209 05:18:23.377108 1340508 main.go:143] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-511751
	
	I1209 05:18:23.377184 1340508 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-511751
	I1209 05:18:23.394451 1340508 main.go:143] libmachine: Using SSH client type: native
	I1209 05:18:23.394765 1340508 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 34125 <nil> <nil>}
	I1209 05:18:23.394786 1340508 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-511751' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-511751/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-511751' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 05:18:23.544254 1340508 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1209 05:18:23.544332 1340508 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22081-1142328/.minikube CaCertPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22081-1142328/.minikube}
	I1209 05:18:23.544372 1340508 ubuntu.go:190] setting up certificates
	I1209 05:18:23.544408 1340508 provision.go:84] configureAuth start
	I1209 05:18:23.544488 1340508 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-511751
	I1209 05:18:23.560746 1340508 provision.go:143] copyHostCerts
	I1209 05:18:23.560819 1340508 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.pem, removing ...
	I1209 05:18:23.560834 1340508 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.pem
	I1209 05:18:23.560917 1340508 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.pem (1078 bytes)
	I1209 05:18:23.561021 1340508 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-1142328/.minikube/cert.pem, removing ...
	I1209 05:18:23.561031 1340508 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-1142328/.minikube/cert.pem
	I1209 05:18:23.561059 1340508 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22081-1142328/.minikube/cert.pem (1123 bytes)
	I1209 05:18:23.561120 1340508 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-1142328/.minikube/key.pem, removing ...
	I1209 05:18:23.561129 1340508 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-1142328/.minikube/key.pem
	I1209 05:18:23.561156 1340508 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22081-1142328/.minikube/key.pem (1675 bytes)
	I1209 05:18:23.561213 1340508 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-511751 san=[127.0.0.1 192.168.76.2 kubernetes-upgrade-511751 localhost minikube]
	I1209 05:18:23.729228 1340508 provision.go:177] copyRemoteCerts
	I1209 05:18:23.729298 1340508 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 05:18:23.729349 1340508 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-511751
	I1209 05:18:23.748653 1340508 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34125 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/kubernetes-upgrade-511751/id_rsa Username:docker}
	I1209 05:18:23.851701 1340508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1209 05:18:23.870044 1340508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1209 05:18:23.888312 1340508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1209 05:18:23.905224 1340508 provision.go:87] duration metric: took 360.776448ms to configureAuth
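	configureAuth above signs a fresh server certificate against the minikube CA with SANs [127.0.0.1 192.168.76.2 kubernetes-upgrade-511751 localhost minikube]. A condensed sketch of that signing step with crypto/x509, assuming the CA certificate and key are already parsed (the real code also writes PEM files and copies them to /etc/docker, as the scp lines just above show):

    package certs

    import (
        "crypto"
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "math/big"
        "net"
        "time"
    )

    // signServerCert issues a CA-signed server certificate covering the
    // given IP and DNS SANs, roughly what provision.go:117 does above.
    func signServerCert(ca *x509.Certificate, caKey crypto.Signer, ips []net.IP, dns []string) ([]byte, *rsa.PrivateKey, error) {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return nil, nil, err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{Organization: []string{"jenkins.kubernetes-upgrade-511751"}},
            NotBefore:    time.Now().Add(-time.Hour),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the profile above
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses:  ips, // 127.0.0.1, 192.168.76.2
            DNSNames:     dns, // kubernetes-upgrade-511751, localhost, minikube
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
        return der, key, err
    }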
	I1209 05:18:23.905264 1340508 ubuntu.go:206] setting minikube options for container-runtime
	I1209 05:18:23.905441 1340508 config.go:182] Loaded profile config "kubernetes-upgrade-511751": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1209 05:18:23.905455 1340508 machine.go:97] duration metric: took 3.895516297s to provisionDockerMachine
	I1209 05:18:23.905464 1340508 start.go:293] postStartSetup for "kubernetes-upgrade-511751" (driver="docker")
	I1209 05:18:23.905476 1340508 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 05:18:23.905531 1340508 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 05:18:23.905574 1340508 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-511751
	I1209 05:18:23.924604 1340508 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34125 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/kubernetes-upgrade-511751/id_rsa Username:docker}
	I1209 05:18:24.028624 1340508 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 05:18:24.031994 1340508 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1209 05:18:24.032046 1340508 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1209 05:18:24.032059 1340508 filesync.go:126] Scanning /home/jenkins/minikube-integration/22081-1142328/.minikube/addons for local assets ...
	I1209 05:18:24.032116 1340508 filesync.go:126] Scanning /home/jenkins/minikube-integration/22081-1142328/.minikube/files for local assets ...
	I1209 05:18:24.032215 1340508 filesync.go:149] local asset: /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/ssl/certs/11442312.pem -> 11442312.pem in /etc/ssl/certs
	I1209 05:18:24.032333 1340508 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 05:18:24.039838 1340508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/ssl/certs/11442312.pem --> /etc/ssl/certs/11442312.pem (1708 bytes)
	I1209 05:18:24.058093 1340508 start.go:296] duration metric: took 152.61397ms for postStartSetup
	I1209 05:18:24.058189 1340508 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1209 05:18:24.058228 1340508 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-511751
	I1209 05:18:24.075721 1340508 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34125 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/kubernetes-upgrade-511751/id_rsa Username:docker}
	I1209 05:18:24.177608 1340508 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1209 05:18:24.182040 1340508 fix.go:56] duration metric: took 4.56404129s for fixHost
	I1209 05:18:24.182066 1340508 start.go:83] releasing machines lock for "kubernetes-upgrade-511751", held for 4.564093063s
	I1209 05:18:24.182165 1340508 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-511751
	I1209 05:18:24.198415 1340508 ssh_runner.go:195] Run: cat /version.json
	I1209 05:18:24.198435 1340508 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 05:18:24.198476 1340508 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-511751
	I1209 05:18:24.198502 1340508 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-511751
	I1209 05:18:24.215457 1340508 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34125 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/kubernetes-upgrade-511751/id_rsa Username:docker}
	I1209 05:18:24.220145 1340508 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34125 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/kubernetes-upgrade-511751/id_rsa Username:docker}
	I1209 05:18:24.425535 1340508 ssh_runner.go:195] Run: systemctl --version
	I1209 05:18:24.432001 1340508 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 05:18:24.436051 1340508 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 05:18:24.436120 1340508 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 05:18:24.446182 1340508 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1209 05:18:24.446247 1340508 start.go:496] detecting cgroup driver to use...
	I1209 05:18:24.446299 1340508 detect.go:187] detected "cgroupfs" cgroup driver on host os
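	detect.go:187 reports "cgroupfs" for this Ubuntu 20.04 host, and the later config edits force containerd to match. One common heuristic for telling cgroup v2 (where systemd usually owns the hierarchy) from v1 is checking for the unified hierarchy's control file; this is a simplified stand-in, not necessarily minikube's exact probe:

    package main

    import (
        "fmt"
        "os"
    )

    // cgroupMode reports "v2 (unified)" when the unified hierarchy is
    // mounted at /sys/fs/cgroup, and "v1" otherwise.
    func cgroupMode() string {
        if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
            return "v2 (unified)"
        }
        return "v1"
    }

    func main() {
        fmt.Println("cgroup hierarchy:", cgroupMode())
    }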
	I1209 05:18:24.446354 1340508 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1209 05:18:24.462983 1340508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1209 05:18:24.476150 1340508 docker.go:218] disabling cri-docker service (if available) ...
	I1209 05:18:24.476251 1340508 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 05:18:24.491573 1340508 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 05:18:24.504721 1340508 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 05:18:24.619470 1340508 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 05:18:24.734717 1340508 docker.go:234] disabling docker service ...
	I1209 05:18:24.734873 1340508 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 05:18:24.749338 1340508 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 05:18:24.762734 1340508 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 05:18:24.877343 1340508 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 05:18:24.988746 1340508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 05:18:25.001589 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 05:18:25.020310 1340508 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1209 05:18:25.030859 1340508 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1209 05:18:25.039782 1340508 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1209 05:18:25.039893 1340508 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1209 05:18:25.048753 1340508 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1209 05:18:25.057357 1340508 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1209 05:18:25.066064 1340508 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1209 05:18:25.074645 1340508 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 05:18:25.082937 1340508 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1209 05:18:25.092387 1340508 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1209 05:18:25.101430 1340508 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1209 05:18:25.115613 1340508 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 05:18:25.124681 1340508 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 05:18:25.132227 1340508 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 05:18:25.239786 1340508 ssh_runner.go:195] Run: sudo systemctl restart containerd
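	The run of sed edits above rewrites /etc/containerd/config.toml in place before the restart: pin the sandbox (pause) image, set SystemdCgroup = false to match the cgroupfs driver just detected, normalize runc to the io.containerd.runc.v2 runtime, point conf_dir at /etc/cni/net.d, and re-enable unprivileged ports. The same kind of edit expressed in Go, reusing two of the log's own sed patterns (an illustration, not minikube's code):

    package main

    import (
        "log"
        "os"
        "regexp"
    )

    func main() {
        const path = "/etc/containerd/config.toml"
        data, err := os.ReadFile(path)
        if err != nil {
            log.Fatal(err)
        }
        // Mirror of: sed -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
        data = regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`).
            ReplaceAll(data, []byte(`${1}SystemdCgroup = false`))
        // Mirror of: sed -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|'
        data = regexp.MustCompile(`(?m)^( *)sandbox_image = .*$`).
            ReplaceAll(data, []byte(`${1}sandbox_image = "registry.k8s.io/pause:3.10.1"`))
        if err := os.WriteFile(path, data, 0o644); err != nil {
            log.Fatal(err)
        }
    }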
	I1209 05:18:25.398997 1340508 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1209 05:18:25.399109 1340508 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1209 05:18:25.403199 1340508 start.go:564] Will wait 60s for crictl version
	I1209 05:18:25.403302 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:18:25.406551 1340508 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1209 05:18:25.430050 1340508 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1209 05:18:25.430164 1340508 ssh_runner.go:195] Run: containerd --version
	I1209 05:18:25.452565 1340508 ssh_runner.go:195] Run: containerd --version
	I1209 05:18:25.484999 1340508 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1209 05:18:25.487955 1340508 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-511751 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1209 05:18:25.503511 1340508 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1209 05:18:25.507527 1340508 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 05:18:25.516807 1340508 kubeadm.go:884] updating cluster {Name:kubernetes-upgrade-511751 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-511751 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 05:18:25.516941 1340508 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1209 05:18:25.517007 1340508 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 05:18:25.544256 1340508 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-beta.0". assuming images are not preloaded.
	I1209 05:18:25.544328 1340508 ssh_runner.go:195] Run: which lz4
	I1209 05:18:25.548052 1340508 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1209 05:18:25.551303 1340508 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1209 05:18:25.551333 1340508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (305624510 bytes)
	I1209 05:18:27.261625 1340508 containerd.go:563] duration metric: took 1.713636288s to copy over tarball
	I1209 05:18:27.261697 1340508 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1209 05:18:29.280888 1340508 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.019160026s)
	I1209 05:18:29.280990 1340508 kubeadm.go:910] preload failed, will try to load cached images: extracting tarball: 
	** stderr ** 
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Europe: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Brazil: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Canada: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Antarctica: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Chile: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Etc: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Pacific: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Mexico: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Australia: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/US: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Asia: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Atlantic: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/America: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Arctic: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Africa: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Indian: Cannot open: File exists
	tar: Exiting with failure status due to previous errors
	
	** /stderr **: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: Process exited with status 2
	stdout:
	
	stderr:
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Europe: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Brazil: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Canada: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Antarctica: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Chile: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Etc: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Pacific: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Mexico: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Australia: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/US: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Asia: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Atlantic: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/America: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Arctic: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Africa: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Indian: Cannot open: File exists
	tar: Exiting with failure status due to previous errors
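	The tar failure here is what flips the flow from "preload" to per-image loading: /var inside the container already holds a containerd snapshot tree from the machine's previous start, GNU tar cannot recreate the conflicting zoneinfo entries on top of it, and it exits with status 2, so kubeadm.go:910 above downgrades the error to "will try to load cached images". An extractor that tolerates pre-existing paths would skip instead of failing; a sketch over an already-decompressed stream (the preload is .tar.lz4, so lz4 decoding is assumed handled by the caller):

    package preload

    import (
        "archive/tar"
        "errors"
        "io"
        "io/fs"
        "os"
        "path/filepath"
    )

    // untarSkipExisting extracts r into dst, skipping any entry whose
    // target path already exists rather than erroring like GNU tar above.
    func untarSkipExisting(r io.Reader, dst string) error {
        tr := tar.NewReader(r)
        for {
            hdr, err := tr.Next()
            if errors.Is(err, io.EOF) {
                return nil
            }
            if err != nil {
                return err
            }
            target := filepath.Join(dst, hdr.Name)
            if _, err := os.Lstat(target); err == nil {
                continue // already present: leave it alone
            }
            switch hdr.Typeflag {
            case tar.TypeDir:
                if err := os.MkdirAll(target, fs.FileMode(hdr.Mode)); err != nil {
                    return err
                }
            case tar.TypeReg:
                f, err := os.OpenFile(target, os.O_CREATE|os.O_WRONLY, fs.FileMode(hdr.Mode))
                if err != nil {
                    return err
                }
                if _, err := io.Copy(f, tr); err != nil {
                    f.Close()
                    return err
                }
                f.Close()
            case tar.TypeSymlink:
                if err := os.Symlink(hdr.Linkname, target); err != nil {
                    return err
                }
            }
        }
    }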
	I1209 05:18:29.281168 1340508 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 05:18:29.313926 1340508 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-beta.0". assuming images are not preloaded.
	I1209 05:18:29.313952 1340508 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0-beta.0 registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 registry.k8s.io/kube-scheduler:v1.35.0-beta.0 registry.k8s.io/kube-proxy:v1.35.0-beta.0 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.5-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1209 05:18:29.314137 1340508 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 05:18:29.314161 1340508 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1209 05:18:29.314360 1340508 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1209 05:18:29.314371 1340508 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1209 05:18:29.314465 1340508 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.5-0
	I1209 05:18:29.314486 1340508 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1209 05:18:29.314551 1340508 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1209 05:18:29.314586 1340508 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1209 05:18:29.316634 1340508 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1209 05:18:29.316747 1340508 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1209 05:18:29.316639 1340508 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.5-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.5-0
	I1209 05:18:29.317000 1340508 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1209 05:18:29.317178 1340508 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1209 05:18:29.317251 1340508 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1209 05:18:29.317320 1340508 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 05:18:29.317202 1340508 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
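	Each required image is checked in three places in turn: the local docker daemon first (the "daemon lookup ... No such image" lines above all miss), then the node's containerd store via `sudo ctr -n=k8s.io images ls name==<ref>` (the containerd.go:267 checks that follow), and finally the on-disk cache, whose tarballs get scp'd over when the runtime lacks the image. The containerd-side check is a plain exec of the ctr CLI; a minimal version (helper name invented):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // imageInContainerd reports whether containerd's k8s.io namespace
    // already holds ref, using the same ctr invocation as the log above.
    func imageInContainerd(ref string) (bool, error) {
        out, err := exec.Command("sudo", "ctr", "-n=k8s.io", "images", "ls", "name=="+ref).Output()
        if err != nil {
            return false, err
        }
        // ctr normally prints a header line even when nothing matches,
        // so more than one line of output means the image is present.
        return len(strings.Split(strings.TrimSpace(string(out)), "\n")) > 1, nil
    }

    func main() {
        ok, err := imageInContainerd("registry.k8s.io/pause:3.10.1")
        fmt.Println(ok, err)
    }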
	I1209 05:18:29.644613 1340508 containerd.go:267] Checking existence of image with name "registry.k8s.io/pause:3.10.1" and sha "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd"
	I1209 05:18:29.644717 1340508 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/pause:3.10.1
	I1209 05:18:29.670210 1340508 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" and sha "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b"
	I1209 05:18:29.670311 1340508 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1209 05:18:29.676818 1340508 containerd.go:267] Checking existence of image with name "registry.k8s.io/coredns/coredns:v1.13.1" and sha "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf"
	I1209 05:18:29.676895 1340508 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/coredns/coredns:v1.13.1
	I1209 05:18:29.678379 1340508 containerd.go:267] Checking existence of image with name "registry.k8s.io/etcd:3.6.5-0" and sha "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42"
	I1209 05:18:29.678440 1340508 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/etcd:3.6.5-0
	I1209 05:18:29.698921 1340508 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" and sha "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be"
	I1209 05:18:29.698996 1340508 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1209 05:18:29.699347 1340508 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" and sha "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4"
	I1209 05:18:29.699390 1340508 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1209 05:18:29.717311 1340508 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1209 05:18:29.717401 1340508 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1209 05:18:29.717491 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:18:29.717610 1340508 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" does not exist at hash "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b" in container runtime
	I1209 05:18:29.717647 1340508 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1209 05:18:29.717706 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:18:29.729491 1340508 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf" in container runtime
	I1209 05:18:29.729580 1340508 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
	I1209 05:18:29.729658 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:18:29.732470 1340508 cache_images.go:118] "registry.k8s.io/etcd:3.6.5-0" needs transfer: "registry.k8s.io/etcd:3.6.5-0" does not exist at hash "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42" in container runtime
	I1209 05:18:29.732558 1340508 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.5-0
	I1209 05:18:29.732640 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:18:29.755954 1340508 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" does not exist at hash "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4" in container runtime
	I1209 05:18:29.756132 1340508 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1209 05:18:29.756202 1340508 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1209 05:18:29.755993 1340508 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" does not exist at hash "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be" in container runtime
	I1209 05:18:29.756274 1340508 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1209 05:18:29.756311 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:18:29.756342 1340508 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1209 05:18:29.756398 1340508 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1209 05:18:29.756212 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:18:29.756515 1340508 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1209 05:18:29.759172 1340508 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-proxy:v1.35.0-beta.0" and sha "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904"
	I1209 05:18:29.759244 1340508 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1209 05:18:29.837356 1340508 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1209 05:18:29.837438 1340508 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1209 05:18:29.837486 1340508 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1209 05:18:29.837536 1340508 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1209 05:18:29.837596 1340508 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1209 05:18:29.837655 1340508 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1209 05:18:29.837730 1340508 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0-beta.0" does not exist at hash "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904" in container runtime
	I1209 05:18:29.837756 1340508 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1209 05:18:29.837784 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:18:29.935710 1340508 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1209 05:18:29.935786 1340508 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1209 05:18:29.935839 1340508 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1209 05:18:29.935910 1340508 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1209 05:18:29.935966 1340508 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1209 05:18:29.936131 1340508 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1209 05:18:29.936189 1340508 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1209 05:18:30.056605 1340508 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0
	I1209 05:18:30.056697 1340508 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1209 05:18:30.056850 1340508 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1209 05:18:30.056855 1340508 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0
	I1209 05:18:30.059714 1340508 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0
	I1209 05:18:30.059916 1340508 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1209 05:18:30.060101 1340508 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1
	I1209 05:18:30.060181 1340508 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1209 05:18:30.060228 1340508 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1209 05:18:30.065415 1340508 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.5-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.5-0': No such file or directory
	I1209 05:18:30.065495 1340508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 --> /var/lib/minikube/images/etcd_3.6.5-0 (21148160 bytes)
	I1209 05:18:30.066696 1340508 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1209 05:18:30.066730 1340508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
	I1209 05:18:30.148587 1340508 containerd.go:285] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1209 05:18:30.148700 1340508 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.10.1
	I1209 05:18:30.170004 1340508 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0
	I1209 05:18:30.170127 1340508 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0
	I1209 05:18:30.170239 1340508 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1209 05:18:30.322429 1340508 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0
	I1209 05:18:30.370571 1340508 containerd.go:285] Loading image: /var/lib/minikube/images/etcd_3.6.5-0
	I1209 05:18:30.370644 1340508 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.5-0
	W1209 05:18:30.565315 1340508 image.go:328] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1209 05:18:30.565468 1340508 containerd.go:267] Checking existence of image with name "gcr.io/k8s-minikube/storage-provisioner:v5" and sha "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51"
	I1209 05:18:30.565531 1340508 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 05:18:31.000694 1340508 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1209 05:18:31.000734 1340508 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 05:18:31.000789 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:18:31.005923 1340508 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 05:18:31.138897 1340508 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1209 05:18:31.139054 1340508 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1209 05:18:31.143097 1340508 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1209 05:18:31.143149 1340508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1209 05:18:31.215714 1340508 containerd.go:285] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1209 05:18:31.215838 1340508 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I1209 05:18:31.567464 1340508 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1209 05:18:31.567522 1340508 cache_images.go:94] duration metric: took 2.25355474s to LoadCachedImages
	W1209 05:18:31.567600 1340508 out.go:285] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0: no such file or directory
	I1209 05:18:31.567613 1340508 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1209 05:18:31.567704 1340508 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-511751 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-511751 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
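	A detail worth noting in the unit rendered above: the bare `ExecStart=` line clears the packaged kubelet unit's command list so that the full flag set on the next line replaces it instead of appending. Generating such a drop-in is ordinary text templating; a cut-down sketch with text/template (template shape and field names are made up, values taken from the log):

    package main

    import (
        "log"
        "os"
        "text/template"
    )

    const dropin = "[Unit]\nWants={{.Runtime}}.service\n\n[Service]\nExecStart=\nExecStart={{.KubeletPath}} --hostname-override={{.NodeName}} --node-ip={{.NodeIP}}\n\n[Install]\n"

    func main() {
        t := template.Must(template.New("kubelet").Parse(dropin))
        err := t.Execute(os.Stdout, map[string]string{
            "Runtime":     "containerd",
            "KubeletPath": "/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet",
            "NodeName":    "kubernetes-upgrade-511751",
            "NodeIP":      "192.168.76.2",
        })
        if err != nil {
            log.Fatal(err)
        }
    }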
	I1209 05:18:31.567774 1340508 ssh_runner.go:195] Run: sudo crictl info
	I1209 05:18:31.603847 1340508 cni.go:84] Creating CNI manager for ""
	I1209 05:18:31.603865 1340508 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1209 05:18:31.603883 1340508 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1209 05:18:31.603906 1340508 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-511751 NodeName:kubernetes-upgrade-511751 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1209 05:18:31.604044 1340508 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "kubernetes-upgrade-511751"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
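	The multi-document YAML above is the config minikube hands to kubeadm (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). As an aside, here is a minimal Go sketch (not minikube code) showing how such a multi-document file can be split and inspected with gopkg.in/yaml.v3; the file path is the one this log writes to.

```go
// Sketch: decode a multi-document kubeadm YAML and print each
// document's apiVersion and kind.
package main

import (
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

type typeMeta struct {
	APIVersion string `yaml:"apiVersion"`
	Kind       string `yaml:"kind"`
}

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml") // path used in the log above
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	// yaml.v3's Decoder steps over the `---` separators one document at a time.
	dec := yaml.NewDecoder(f)
	for {
		var tm typeMeta
		if err := dec.Decode(&tm); err == io.EOF {
			break
		} else if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%s / %s\n", tm.APIVersion, tm.Kind)
	}
}
```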
	I1209 05:18:31.604105 1340508 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1209 05:18:31.615991 1340508 binaries.go:51] Found k8s binaries, skipping transfer
	I1209 05:18:31.616123 1340508 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1209 05:18:31.625920 1340508 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (336 bytes)
	I1209 05:18:31.640908 1340508 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1209 05:18:31.654364 1340508 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2245 bytes)
	I1209 05:18:31.667250 1340508 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1209 05:18:31.670739 1340508 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 05:18:31.680277 1340508 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 05:18:31.801705 1340508 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 05:18:31.820464 1340508 certs.go:69] Setting up /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/kubernetes-upgrade-511751 for IP: 192.168.76.2
	I1209 05:18:31.820489 1340508 certs.go:195] generating shared ca certs ...
	I1209 05:18:31.820505 1340508 certs.go:227] acquiring lock for ca certs: {Name:mk15788702f8c4e23b5aeab3f44961d296fab259 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 05:18:31.820635 1340508 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.key
	I1209 05:18:31.820685 1340508 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/proxy-client-ca.key
	I1209 05:18:31.820703 1340508 certs.go:257] generating profile certs ...
	I1209 05:18:31.820793 1340508 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/kubernetes-upgrade-511751/client.key
	I1209 05:18:31.820842 1340508 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/kubernetes-upgrade-511751/apiserver.key.7aaa44d1
	I1209 05:18:31.820887 1340508 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/kubernetes-upgrade-511751/proxy-client.key
	I1209 05:18:31.821003 1340508 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/1144231.pem (1338 bytes)
	W1209 05:18:31.821070 1340508 certs.go:480] ignoring /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/1144231_empty.pem, impossibly tiny 0 bytes
	I1209 05:18:31.821083 1340508 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca-key.pem (1679 bytes)
	I1209 05:18:31.821111 1340508 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem (1078 bytes)
	I1209 05:18:31.821149 1340508 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/cert.pem (1123 bytes)
	I1209 05:18:31.821176 1340508 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/key.pem (1675 bytes)
	I1209 05:18:31.821223 1340508 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/ssl/certs/11442312.pem (1708 bytes)
	I1209 05:18:31.821785 1340508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 05:18:31.844891 1340508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1209 05:18:31.863591 1340508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 05:18:31.885363 1340508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1209 05:18:31.903668 1340508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/kubernetes-upgrade-511751/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1209 05:18:31.921290 1340508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/kubernetes-upgrade-511751/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1209 05:18:31.938269 1340508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/kubernetes-upgrade-511751/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 05:18:31.956457 1340508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/kubernetes-upgrade-511751/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1209 05:18:31.974372 1340508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/ssl/certs/11442312.pem --> /usr/share/ca-certificates/11442312.pem (1708 bytes)
	I1209 05:18:31.990798 1340508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 05:18:32.009715 1340508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/1144231.pem --> /usr/share/ca-certificates/1144231.pem (1338 bytes)
	I1209 05:18:32.027866 1340508 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 05:18:32.041373 1340508 ssh_runner.go:195] Run: openssl version
	I1209 05:18:32.047838 1340508 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/11442312.pem
	I1209 05:18:32.055328 1340508 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/11442312.pem /etc/ssl/certs/11442312.pem
	I1209 05:18:32.062793 1340508 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11442312.pem
	I1209 05:18:32.066756 1340508 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 04:18 /usr/share/ca-certificates/11442312.pem
	I1209 05:18:32.066870 1340508 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11442312.pem
	I1209 05:18:32.107997 1340508 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1209 05:18:32.115998 1340508 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1209 05:18:32.123348 1340508 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1209 05:18:32.130918 1340508 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 05:18:32.134797 1340508 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 04:09 /usr/share/ca-certificates/minikubeCA.pem
	I1209 05:18:32.134917 1340508 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 05:18:32.176240 1340508 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1209 05:18:32.183738 1340508 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1144231.pem
	I1209 05:18:32.190992 1340508 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1144231.pem /etc/ssl/certs/1144231.pem
	I1209 05:18:32.198214 1340508 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1144231.pem
	I1209 05:18:32.201887 1340508 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 04:18 /usr/share/ca-certificates/1144231.pem
	I1209 05:18:32.201971 1340508 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1144231.pem
	I1209 05:18:32.243868 1340508 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
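	The hash-then-symlink sequence above is how each CA certificate is installed into /etc/ssl/certs: `openssl x509 -hash -noout` prints the subject-name hash, and a `<hash>.0` symlink lets OpenSSL's lookup-by-hash find the cert. A hedged Go sketch of the same two steps, shelling out to openssl exactly as the log does; paths are illustrative.

```go
// Sketch: install a CA cert by hashing it and symlinking <hash>.0
// into /etc/ssl/certs (requires root, like the sudo commands above).
package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"strings"
)

func installCA(pemPath string) error {
	// Prints the hash OpenSSL uses to locate certs in a CApath directory.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hash %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	// `ln -fs` semantics: drop any stale symlink first.
	_ = os.Remove(link)
	return os.Symlink(pemPath, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		log.Fatal(err)
	}
}
```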
	I1209 05:18:32.251967 1340508 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 05:18:32.255997 1340508 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1209 05:18:32.297010 1340508 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1209 05:18:32.341764 1340508 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1209 05:18:32.386213 1340508 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1209 05:18:32.429078 1340508 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1209 05:18:32.476990 1340508 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
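	Each `-checkend 86400` call above asks openssl whether the certificate expires within the next 86400 seconds (24 hours); a non-zero exit would trigger regeneration. The same check in pure Go, as a sketch using one of the cert paths from this log (not minikube's actual implementation):

```go
// Sketch: pure-Go equivalent of `openssl x509 -checkend 86400`.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// True when now+d is past NotAfter, i.e. the cert expires within d.
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("expires within 24h:", soon)
}
```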
	I1209 05:18:32.528191 1340508 kubeadm.go:401] StartCluster: {Name:kubernetes-upgrade-511751 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-511751 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 05:18:32.528336 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1209 05:18:32.528430 1340508 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 05:18:32.556335 1340508 cri.go:89] found id: "7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547"
	I1209 05:18:32.556415 1340508 cri.go:89] found id: "ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a"
	I1209 05:18:32.556436 1340508 cri.go:89] found id: "40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b"
	I1209 05:18:32.556447 1340508 cri.go:89] found id: "f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da"
	I1209 05:18:32.556452 1340508 cri.go:89] found id: ""
	I1209 05:18:32.556523 1340508 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	W1209 05:18:32.588281 1340508 kubeadm.go:408] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-09T05:18:32Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
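	The warning above is benign: the runc state root for the k8s.io namespace does not exist yet, so there are no paused containers to resume, and the restart logic continues. A sketch of that tolerate-the-missing-root pattern (assumption: distinguishing the case by matching the error text, which is what the stderr above suggests):

```go
// Sketch: list runc containers but treat a missing state root as
// "nothing paused" rather than a fatal error, mirroring the
// warn-and-continue behaviour in the log above.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func listPaused(root string) ([]byte, error) {
	out, err := exec.Command("sudo", "runc", "--root", root, "list", "-f", "json").CombinedOutput()
	if err != nil && strings.Contains(string(out), "no such file or directory") {
		return nil, nil // state root absent: nothing was ever paused
	}
	return out, err
}

func main() {
	out, err := listPaused("/run/containerd/runc/k8s.io")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s\n", out)
}
```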
	I1209 05:18:32.588350 1340508 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1209 05:18:32.596311 1340508 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1209 05:18:32.596332 1340508 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1209 05:18:32.596389 1340508 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1209 05:18:32.603965 1340508 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1209 05:18:32.604621 1340508 kubeconfig.go:47] verify endpoint returned: get endpoint: "kubernetes-upgrade-511751" does not appear in /home/jenkins/minikube-integration/22081-1142328/kubeconfig
	I1209 05:18:32.604913 1340508 kubeconfig.go:62] /home/jenkins/minikube-integration/22081-1142328/kubeconfig needs updating (will repair): [kubeconfig missing "kubernetes-upgrade-511751" cluster setting kubeconfig missing "kubernetes-upgrade-511751" context setting]
	I1209 05:18:32.605380 1340508 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1142328/kubeconfig: {Name:mk0dc127429f88dc4fdfb2d110deebc58207c1b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 05:18:32.606069 1340508 kapi.go:59] client config for kubernetes-upgrade-511751: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/kubernetes-upgrade-511751/client.crt", KeyFile:"/home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/kubernetes-upgrade-511751/client.key", CAFile:"/home/jenkins/minikube-integration/22081-1142328/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb3ec0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1209 05:18:32.606594 1340508 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1209 05:18:32.606612 1340508 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1209 05:18:32.606618 1340508 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1209 05:18:32.606627 1340508 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1209 05:18:32.606632 1340508 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1209 05:18:32.606919 1340508 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1209 05:18:32.618047 1340508 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-09 05:17:56.494480375 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-09 05:18:31.661632970 +0000
	@@ -1,4 +1,4 @@
	-apiVersion: kubeadm.k8s.io/v1beta3
	+apiVersion: kubeadm.k8s.io/v1beta4
	 kind: InitConfiguration
	 localAPIEndpoint:
	   advertiseAddress: 192.168.76.2
	@@ -14,31 +14,34 @@
	   criSocket: unix:///run/containerd/containerd.sock
	   name: "kubernetes-upgrade-511751"
	   kubeletExtraArgs:
	-    node-ip: 192.168.76.2
	+    - name: "node-ip"
	+      value: "192.168.76.2"
	   taints: []
	 ---
	-apiVersion: kubeadm.k8s.io/v1beta3
	+apiVersion: kubeadm.k8s.io/v1beta4
	 kind: ClusterConfiguration
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	   extraArgs:
	-    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+    - name: "enable-admission-plugins"
	+      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	 controllerManager:
	   extraArgs:
	-    allocate-node-cidrs: "true"
	-    leader-elect: "false"
	+    - name: "allocate-node-cidrs"
	+      value: "true"
	+    - name: "leader-elect"
	+      value: "false"
	 scheduler:
	   extraArgs:
	-    leader-elect: "false"
	+    - name: "leader-elect"
	+      value: "false"
	 certificatesDir: /var/lib/minikube/certs
	 clusterName: mk
	 controlPlaneEndpoint: control-plane.minikube.internal:8443
	 etcd:
	   local:
	     dataDir: /var/lib/minikube/etcd
	-    extraArgs:
	-      proxy-refresh-interval: "70000"
	-kubernetesVersion: v1.28.0
	+kubernetesVersion: v1.35.0-beta.0
	 networking:
	   dnsDomain: cluster.local
	   podSubnet: "10.244.0.0/16"
	
	-- /stdout --
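	The drift detection above rests entirely on diff's exit status: 0 means the deployed kubeadm.yaml matches the new one, 1 means they differ (here, the v1beta3-to-v1beta4 schema migration plus the version bump), and anything else is a real failure. A Go sketch of that same decision (not minikube's code; the paths are the ones from this log):

```go
// Sketch: detect kubeadm config drift via `diff -u`, treating exit
// status 1 (files differ) as "reconfigure" and other failures as errors.
package main

import (
	"errors"
	"fmt"
	"log"
	"os/exec"
)

func configDrift(oldPath, newPath string) (string, bool, error) {
	out, err := exec.Command("sudo", "diff", "-u", oldPath, newPath).CombinedOutput()
	if err == nil {
		return "", false, nil // identical: nothing to do
	}
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 1 {
		return string(out), true, nil // differ: reconfigure from newPath
	}
	return "", false, err // diff itself failed
}

func main() {
	diff, drifted, err := configDrift("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		log.Fatal(err)
	}
	if drifted {
		fmt.Println("drift detected:\n" + diff)
	}
}
```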
	I1209 05:18:32.618169 1340508 kubeadm.go:1161] stopping kube-system containers ...
	I1209 05:18:32.618228 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I1209 05:18:32.618333 1340508 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 05:18:32.647580 1340508 cri.go:89] found id: "7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547"
	I1209 05:18:32.647648 1340508 cri.go:89] found id: "ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a"
	I1209 05:18:32.647669 1340508 cri.go:89] found id: "40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b"
	I1209 05:18:32.647688 1340508 cri.go:89] found id: "f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da"
	I1209 05:18:32.647706 1340508 cri.go:89] found id: ""
	I1209 05:18:32.647742 1340508 cri.go:252] Stopping containers: [7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547 ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a 40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da]
	I1209 05:18:32.647825 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:18:32.651532 1340508 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl stop --timeout=10 7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547 ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a 40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da
	I1209 05:18:32.694046 1340508 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1209 05:18:32.713239 1340508 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 05:18:32.721494 1340508 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5643 Dec  9 05:18 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Dec  9 05:18 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2039 Dec  9 05:18 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Dec  9 05:18 /etc/kubernetes/scheduler.conf
	
	I1209 05:18:32.721612 1340508 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1209 05:18:32.729336 1340508 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1209 05:18:32.736907 1340508 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1209 05:18:32.746246 1340508 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1209 05:18:32.746375 1340508 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 05:18:32.754485 1340508 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1209 05:18:32.761615 1340508 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1209 05:18:32.761728 1340508 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
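	The grep/rm pairs above prune kubeconfig files that no longer reference the expected control-plane endpoint, so the upcoming `kubeadm init phase kubeconfig` regenerates them from scratch. A minimal sketch of that pattern (endpoint and file list taken from this log):

```go
// Sketch: remove kubeconfig files that don't point at the expected
// control-plane endpoint so kubeadm will regenerate them.
package main

import (
	"bytes"
	"log"
	"os"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil {
			continue // a missing file will simply be regenerated
		}
		if !bytes.Contains(data, []byte(endpoint)) {
			log.Printf("%s lacks %s, removing", f, endpoint)
			_ = os.Remove(f)
		}
	}
}
```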
	I1209 05:18:32.769087 1340508 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 05:18:32.776388 1340508 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 05:18:32.820979 1340508 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 05:18:34.353495 1340508 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.532436115s)
	I1209 05:18:34.353571 1340508 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1209 05:18:34.561497 1340508 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 05:18:34.631878 1340508 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
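	The five `kubeadm init phase` invocations above rebuild the control plane piecewise (certs, kubeconfigs, kubelet start, static pods, local etcd) instead of re-running a full `kubeadm init`. A sketch driving the same phase sequence; the wrapper itself is hypothetical, not minikube code:

```go
// Sketch: rerun the individual kubeadm init phases used for a
// control-plane restart, in the order the log runs them.
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	const cfg = "/var/tmp/minikube/kubeadm.yaml"
	phases := [][]string{
		{"certs", "all"},
		{"kubeconfig", "all"},
		{"kubelet-start"},
		{"control-plane", "all"},
		{"etcd", "local"},
	}
	for _, p := range phases {
		args := append([]string{"init", "phase"}, p...)
		args = append(args, "--config", cfg)
		fmt.Println("kubeadm", args)
		if out, err := exec.Command("sudo", append([]string{"kubeadm"}, args...)...).CombinedOutput(); err != nil {
			log.Fatalf("phase %v failed: %v\n%s", p, err, out)
		}
	}
}
```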
	I1209 05:18:34.680059 1340508 api_server.go:52] waiting for apiserver process to appear ...
	I1209 05:18:34.680152 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:18:35.181390 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:18:35.681243 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:18:36.180285 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:18:36.681281 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:18:37.180288 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:18:37.681182 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:18:38.180313 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:18:38.681094 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:18:39.180292 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:18:39.680367 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:18:40.180694 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:18:40.680287 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:18:41.180327 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:18:41.680876 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:18:42.181320 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:18:42.680637 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:18:43.180795 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:18:43.681201 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:18:44.180876 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:18:44.680277 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:18:45.181332 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:18:45.680308 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:18:46.180393 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:18:46.680452 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:18:47.180917 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:18:47.680348 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:18:48.181216 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:18:48.680799 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:18:49.180383 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:18:49.681136 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:18:50.180309 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:18:50.681166 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:18:51.180851 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:18:51.680845 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:18:52.180290 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:18:52.680860 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:18:53.181043 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:18:53.680245 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:18:54.180842 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:18:54.680667 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:18:55.181104 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:18:55.681245 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:18:56.180314 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:18:56.681185 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:18:57.181008 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:18:57.680312 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:18:58.180378 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:18:58.681207 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:18:59.180278 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:18:59.681160 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:19:00.180382 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:19:00.680305 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:19:01.180389 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:19:01.681222 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:19:02.181210 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:19:02.680906 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:19:03.181208 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:19:03.680703 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:19:04.180248 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:19:04.680612 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:19:05.181145 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:19:05.681405 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:19:06.181048 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:19:06.688189 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:19:07.180281 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:19:07.681179 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:19:08.181207 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:19:08.680848 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:19:09.180262 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:19:09.680779 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:19:10.180717 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:19:10.683216 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:19:11.180928 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:19:11.680963 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:19:12.180928 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:19:12.681093 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:19:13.180243 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:19:13.680912 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:19:14.180803 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:19:14.680313 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:19:15.181223 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:19:15.680320 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:19:16.180909 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:19:16.681189 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:19:17.180832 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:19:17.681189 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:19:18.181238 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:19:18.681218 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:19:19.180515 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:19:19.680829 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:19:20.181257 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:19:20.681290 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:19:21.180919 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:19:21.681252 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:19:22.180236 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:19:22.680922 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:19:23.181064 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:19:23.680265 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:19:24.181075 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:19:24.683466 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:19:25.180364 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:19:25.681155 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:19:26.180876 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:19:26.680224 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:19:27.180631 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:19:27.681235 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:19:28.181089 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:19:28.680272 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:19:29.181002 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:19:29.681000 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:19:30.181226 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:19:30.680271 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:19:31.180913 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:19:31.681020 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:19:32.180903 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:19:32.680438 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:19:33.180851 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:19:33.681015 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:19:34.181227 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
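	The long run of pgrep lines above is a poll loop: roughly every 500ms minikube checks whether a kube-apiserver process matching the minikube pattern exists yet, and in this failing run it never does. A sketch of such a loop (the timeout value here is an assumption, not taken from the log):

```go
// Sketch: poll for the apiserver process with pgrep until it appears
// or a deadline passes, mirroring the repeated Run lines above.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"time"
)

func waitForAPIServer(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// pgrep exits 0 exactly when a matching process exists.
		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver did not appear within %s", timeout)
}

func main() {
	if err := waitForAPIServer(4 * time.Minute); err != nil {
		log.Fatal(err)
	}
	fmt.Println("apiserver process is up")
}
```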
	I1209 05:19:34.680715 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:19:34.680798 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:19:34.728507 1340508 cri.go:89] found id: "7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547"
	I1209 05:19:34.728527 1340508 cri.go:89] found id: ""
	I1209 05:19:34.728535 1340508 logs.go:282] 1 containers: [7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547]
	I1209 05:19:34.728601 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:19:34.737123 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:19:34.737201 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:19:34.796939 1340508 cri.go:89] found id: "40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b"
	I1209 05:19:34.796958 1340508 cri.go:89] found id: ""
	I1209 05:19:34.796966 1340508 logs.go:282] 1 containers: [40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b]
	I1209 05:19:34.797023 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:19:34.800697 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:19:34.800777 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:19:34.830084 1340508 cri.go:89] found id: ""
	I1209 05:19:34.830115 1340508 logs.go:282] 0 containers: []
	W1209 05:19:34.830124 1340508 logs.go:284] No container was found matching "coredns"
	I1209 05:19:34.830131 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:19:34.830192 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:19:34.870831 1340508 cri.go:89] found id: "ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a"
	I1209 05:19:34.870856 1340508 cri.go:89] found id: ""
	I1209 05:19:34.870865 1340508 logs.go:282] 1 containers: [ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a]
	I1209 05:19:34.870918 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:19:34.875456 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:19:34.875522 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:19:34.906845 1340508 cri.go:89] found id: ""
	I1209 05:19:34.906873 1340508 logs.go:282] 0 containers: []
	W1209 05:19:34.906883 1340508 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:19:34.906889 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:19:34.906948 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:19:34.950282 1340508 cri.go:89] found id: "f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da"
	I1209 05:19:34.950306 1340508 cri.go:89] found id: ""
	I1209 05:19:34.950314 1340508 logs.go:282] 1 containers: [f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da]
	I1209 05:19:34.950402 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:19:34.955336 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:19:34.955425 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:19:34.998685 1340508 cri.go:89] found id: ""
	I1209 05:19:34.998710 1340508 logs.go:282] 0 containers: []
	W1209 05:19:34.998719 1340508 logs.go:284] No container was found matching "kindnet"
	I1209 05:19:34.998726 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:19:34.998792 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:19:35.040817 1340508 cri.go:89] found id: ""
	I1209 05:19:35.040841 1340508 logs.go:282] 0 containers: []
	W1209 05:19:35.040857 1340508 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:19:35.040872 1340508 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:19:35.040898 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:19:35.124747 1340508 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:19:35.124770 1340508 logs.go:123] Gathering logs for kube-apiserver [7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547] ...
	I1209 05:19:35.124784 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547"
	I1209 05:19:35.179805 1340508 logs.go:123] Gathering logs for kube-scheduler [ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a] ...
	I1209 05:19:35.179907 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a"
	I1209 05:19:35.218567 1340508 logs.go:123] Gathering logs for containerd ...
	I1209 05:19:35.218656 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:19:35.256582 1340508 logs.go:123] Gathering logs for container status ...
	I1209 05:19:35.256659 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:19:35.299220 1340508 logs.go:123] Gathering logs for kubelet ...
	I1209 05:19:35.299298 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:19:35.364009 1340508 logs.go:123] Gathering logs for dmesg ...
	I1209 05:19:35.364106 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:19:35.390432 1340508 logs.go:123] Gathering logs for etcd [40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b] ...
	I1209 05:19:35.390507 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b"
	I1209 05:19:35.446149 1340508 logs.go:123] Gathering logs for kube-controller-manager [f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da] ...
	I1209 05:19:35.446231 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da"
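	Between poll attempts, minikube sweeps diagnostics: `crictl logs --tail 400` for each control-plane container it found, `journalctl` for kubelet and containerd, plus dmesg and container status. A sketch of that sweep; the container ID below is a truncated placeholder from this log, not a value to reuse:

```go
// Sketch: tail each known container's logs via crictl and the node
// services via journalctl, printing everything for later triage.
package main

import (
	"fmt"
	"os/exec"
)

func gather(name string, cmd ...string) {
	out, err := exec.Command("sudo", cmd...).CombinedOutput()
	fmt.Printf("==> %s (err=%v)\n%s\n", name, err, out)
}

func main() {
	containers := map[string]string{ // name -> container ID (placeholder)
		"kube-apiserver": "7af9450f9149",
	}
	for name, id := range containers {
		gather(name, "crictl", "logs", "--tail", "400", id)
	}
	gather("kubelet", "journalctl", "-u", "kubelet", "-n", "400")
	gather("containerd", "journalctl", "-u", "containerd", "-n", "400")
}
```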
	I1209 05:19:38.023129 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:19:38.035708 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:19:38.035786 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:19:38.097296 1340508 cri.go:89] found id: "7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547"
	I1209 05:19:38.097315 1340508 cri.go:89] found id: ""
	I1209 05:19:38.097324 1340508 logs.go:282] 1 containers: [7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547]
	I1209 05:19:38.097383 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:19:38.101597 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:19:38.101669 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:19:38.158250 1340508 cri.go:89] found id: "40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b"
	I1209 05:19:38.158269 1340508 cri.go:89] found id: ""
	I1209 05:19:38.158277 1340508 logs.go:282] 1 containers: [40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b]
	I1209 05:19:38.158332 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:19:38.161869 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:19:38.161950 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:19:38.208246 1340508 cri.go:89] found id: ""
	I1209 05:19:38.208272 1340508 logs.go:282] 0 containers: []
	W1209 05:19:38.208281 1340508 logs.go:284] No container was found matching "coredns"
	I1209 05:19:38.208287 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:19:38.208346 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:19:38.257802 1340508 cri.go:89] found id: "ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a"
	I1209 05:19:38.257825 1340508 cri.go:89] found id: ""
	I1209 05:19:38.257834 1340508 logs.go:282] 1 containers: [ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a]
	I1209 05:19:38.257892 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:19:38.261548 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:19:38.261620 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:19:38.314070 1340508 cri.go:89] found id: ""
	I1209 05:19:38.314094 1340508 logs.go:282] 0 containers: []
	W1209 05:19:38.314103 1340508 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:19:38.314109 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:19:38.314167 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:19:38.366460 1340508 cri.go:89] found id: "f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da"
	I1209 05:19:38.366482 1340508 cri.go:89] found id: ""
	I1209 05:19:38.366491 1340508 logs.go:282] 1 containers: [f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da]
	I1209 05:19:38.366551 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:19:38.370482 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:19:38.370562 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:19:38.416247 1340508 cri.go:89] found id: ""
	I1209 05:19:38.416271 1340508 logs.go:282] 0 containers: []
	W1209 05:19:38.416280 1340508 logs.go:284] No container was found matching "kindnet"
	I1209 05:19:38.416287 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:19:38.416344 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:19:38.458749 1340508 cri.go:89] found id: ""
	I1209 05:19:38.458773 1340508 logs.go:282] 0 containers: []
	W1209 05:19:38.458791 1340508 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:19:38.458806 1340508 logs.go:123] Gathering logs for containerd ...
	I1209 05:19:38.458818 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:19:38.511929 1340508 logs.go:123] Gathering logs for kubelet ...
	I1209 05:19:38.511961 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:19:38.614792 1340508 logs.go:123] Gathering logs for kube-apiserver [7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547] ...
	I1209 05:19:38.614869 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547"
	I1209 05:19:38.716895 1340508 logs.go:123] Gathering logs for kube-scheduler [ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a] ...
	I1209 05:19:38.716970 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a"
	I1209 05:19:38.762926 1340508 logs.go:123] Gathering logs for kube-controller-manager [f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da] ...
	I1209 05:19:38.762998 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da"
	I1209 05:19:38.815183 1340508 logs.go:123] Gathering logs for container status ...
	I1209 05:19:38.815327 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:19:38.871148 1340508 logs.go:123] Gathering logs for dmesg ...
	I1209 05:19:38.871222 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:19:38.893625 1340508 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:19:38.893701 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:19:39.025224 1340508 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:19:39.025258 1340508 logs.go:123] Gathering logs for etcd [40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b] ...
	I1209 05:19:39.025286 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b"
	I1209 05:19:41.593438 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:19:41.603711 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:19:41.603777 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:19:41.638487 1340508 cri.go:89] found id: "7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547"
	I1209 05:19:41.638510 1340508 cri.go:89] found id: ""
	I1209 05:19:41.638518 1340508 logs.go:282] 1 containers: [7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547]
	I1209 05:19:41.638576 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:19:41.643524 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:19:41.643605 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:19:41.675131 1340508 cri.go:89] found id: "40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b"
	I1209 05:19:41.675149 1340508 cri.go:89] found id: ""
	I1209 05:19:41.675157 1340508 logs.go:282] 1 containers: [40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b]
	I1209 05:19:41.675213 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:19:41.680890 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:19:41.680985 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:19:41.774152 1340508 cri.go:89] found id: ""
	I1209 05:19:41.774177 1340508 logs.go:282] 0 containers: []
	W1209 05:19:41.774186 1340508 logs.go:284] No container was found matching "coredns"
	I1209 05:19:41.774193 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:19:41.774250 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:19:41.812621 1340508 cri.go:89] found id: "ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a"
	I1209 05:19:41.812639 1340508 cri.go:89] found id: ""
	I1209 05:19:41.812647 1340508 logs.go:282] 1 containers: [ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a]
	I1209 05:19:41.812702 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:19:41.817274 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:19:41.817345 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:19:41.862803 1340508 cri.go:89] found id: ""
	I1209 05:19:41.862826 1340508 logs.go:282] 0 containers: []
	W1209 05:19:41.862834 1340508 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:19:41.862839 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:19:41.862900 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:19:41.895617 1340508 cri.go:89] found id: "f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da"
	I1209 05:19:41.895694 1340508 cri.go:89] found id: ""
	I1209 05:19:41.895715 1340508 logs.go:282] 1 containers: [f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da]
	I1209 05:19:41.895815 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:19:41.900371 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:19:41.900490 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:19:41.953068 1340508 cri.go:89] found id: ""
	I1209 05:19:41.953148 1340508 logs.go:282] 0 containers: []
	W1209 05:19:41.953171 1340508 logs.go:284] No container was found matching "kindnet"
	I1209 05:19:41.953191 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:19:41.953350 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:19:41.983759 1340508 cri.go:89] found id: ""
	I1209 05:19:41.983833 1340508 logs.go:282] 0 containers: []
	W1209 05:19:41.983855 1340508 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:19:41.983880 1340508 logs.go:123] Gathering logs for containerd ...
	I1209 05:19:41.983927 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:19:42.018144 1340508 logs.go:123] Gathering logs for container status ...
	I1209 05:19:42.018226 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:19:42.059493 1340508 logs.go:123] Gathering logs for kubelet ...
	I1209 05:19:42.059521 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:19:42.129991 1340508 logs.go:123] Gathering logs for dmesg ...
	I1209 05:19:42.130088 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:19:42.151942 1340508 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:19:42.152477 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:19:42.307134 1340508 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:19:42.307208 1340508 logs.go:123] Gathering logs for kube-apiserver [7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547] ...
	I1209 05:19:42.307248 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547"
	I1209 05:19:42.349755 1340508 logs.go:123] Gathering logs for kube-scheduler [ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a] ...
	I1209 05:19:42.349840 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a"
	I1209 05:19:42.392608 1340508 logs.go:123] Gathering logs for kube-controller-manager [f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da] ...
	I1209 05:19:42.392694 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da"
	I1209 05:19:42.459984 1340508 logs.go:123] Gathering logs for etcd [40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b] ...
	I1209 05:19:42.460080 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b"
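Each cycle above is the same discovery pass: for every expected control-plane component, minikube runs "sudo crictl ps -a --quiet --name=<component>" over SSH and records whatever container IDs come back. In this run only kube-apiserver, etcd, kube-scheduler and kube-controller-manager are found; coredns, kube-proxy, kindnet and storage-provisioner are missing. A minimal standalone sketch of that step in Go, assuming crictl is installed and sudo is available; the helper names are illustrative and are not minikube's actual code:

    // A sketch of the container-discovery pass, mirroring the
    // "sudo crictl ps -a --quiet --name=<name>" lines in the log above.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // listContainerIDs returns the IDs crictl reports for a container name;
    // --quiet prints one container ID per line, matching "found id:" above.
    func listContainerIDs(name string) ([]string, error) {
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
    	if err != nil {
    		return nil, err
    	}
    	var ids []string
    	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
    		if line != "" {
    			ids = append(ids, line)
    		}
    	}
    	return ids, nil
    }

    func main() {
    	for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"} {
    		ids, err := listContainerIDs(name)
    		if err != nil || len(ids) == 0 {
    			fmt.Printf("no container was found matching %q\n", name)
    			continue
    		}
    		fmt.Printf("%d containers: %v\n", len(ids), ids)
    	}
    }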
	I1209 05:19:45.013410 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:19:45.035618 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:19:45.035697 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:19:45.080196 1340508 cri.go:89] found id: "7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547"
	I1209 05:19:45.080218 1340508 cri.go:89] found id: ""
	I1209 05:19:45.080227 1340508 logs.go:282] 1 containers: [7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547]
	I1209 05:19:45.080294 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:19:45.090293 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:19:45.090386 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:19:45.132877 1340508 cri.go:89] found id: "40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b"
	I1209 05:19:45.132899 1340508 cri.go:89] found id: ""
	I1209 05:19:45.132908 1340508 logs.go:282] 1 containers: [40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b]
	I1209 05:19:45.132973 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:19:45.141709 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:19:45.141804 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:19:45.193508 1340508 cri.go:89] found id: ""
	I1209 05:19:45.193535 1340508 logs.go:282] 0 containers: []
	W1209 05:19:45.193544 1340508 logs.go:284] No container was found matching "coredns"
	I1209 05:19:45.193552 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:19:45.193621 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:19:45.294860 1340508 cri.go:89] found id: "ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a"
	I1209 05:19:45.294881 1340508 cri.go:89] found id: ""
	I1209 05:19:45.294889 1340508 logs.go:282] 1 containers: [ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a]
	I1209 05:19:45.294950 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:19:45.301441 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:19:45.301522 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:19:45.346220 1340508 cri.go:89] found id: ""
	I1209 05:19:45.346282 1340508 logs.go:282] 0 containers: []
	W1209 05:19:45.346341 1340508 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:19:45.346370 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:19:45.346508 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:19:45.401682 1340508 cri.go:89] found id: "f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da"
	I1209 05:19:45.401702 1340508 cri.go:89] found id: ""
	I1209 05:19:45.401711 1340508 logs.go:282] 1 containers: [f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da]
	I1209 05:19:45.401767 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:19:45.411068 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:19:45.411133 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:19:45.492256 1340508 cri.go:89] found id: ""
	I1209 05:19:45.492279 1340508 logs.go:282] 0 containers: []
	W1209 05:19:45.492288 1340508 logs.go:284] No container was found matching "kindnet"
	I1209 05:19:45.492294 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:19:45.492360 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:19:45.582244 1340508 cri.go:89] found id: ""
	I1209 05:19:45.582265 1340508 logs.go:282] 0 containers: []
	W1209 05:19:45.582273 1340508 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:19:45.582286 1340508 logs.go:123] Gathering logs for kubelet ...
	I1209 05:19:45.582297 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:19:45.680902 1340508 logs.go:123] Gathering logs for dmesg ...
	I1209 05:19:45.681005 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:19:45.706628 1340508 logs.go:123] Gathering logs for kube-apiserver [7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547] ...
	I1209 05:19:45.706656 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547"
	I1209 05:19:45.767147 1340508 logs.go:123] Gathering logs for etcd [40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b] ...
	I1209 05:19:45.767178 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b"
	I1209 05:19:45.832826 1340508 logs.go:123] Gathering logs for kube-scheduler [ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a] ...
	I1209 05:19:45.832903 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a"
	I1209 05:19:45.883401 1340508 logs.go:123] Gathering logs for kube-controller-manager [f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da] ...
	I1209 05:19:45.883470 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da"
	I1209 05:19:45.928287 1340508 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:19:45.928316 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:19:46.037799 1340508 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:19:46.037821 1340508 logs.go:123] Gathering logs for containerd ...
	I1209 05:19:46.037833 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:19:46.090699 1340508 logs.go:123] Gathering logs for container status ...
	I1209 05:19:46.090783 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:19:48.636138 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:19:48.646871 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:19:48.646940 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:19:48.678109 1340508 cri.go:89] found id: "7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547"
	I1209 05:19:48.678132 1340508 cri.go:89] found id: ""
	I1209 05:19:48.678142 1340508 logs.go:282] 1 containers: [7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547]
	I1209 05:19:48.678218 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:19:48.682407 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:19:48.682477 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:19:48.709252 1340508 cri.go:89] found id: "40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b"
	I1209 05:19:48.709270 1340508 cri.go:89] found id: ""
	I1209 05:19:48.709279 1340508 logs.go:282] 1 containers: [40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b]
	I1209 05:19:48.709338 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:19:48.713427 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:19:48.713500 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:19:48.745768 1340508 cri.go:89] found id: ""
	I1209 05:19:48.745789 1340508 logs.go:282] 0 containers: []
	W1209 05:19:48.745797 1340508 logs.go:284] No container was found matching "coredns"
	I1209 05:19:48.745803 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:19:48.745858 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:19:48.812285 1340508 cri.go:89] found id: "ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a"
	I1209 05:19:48.812311 1340508 cri.go:89] found id: ""
	I1209 05:19:48.812319 1340508 logs.go:282] 1 containers: [ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a]
	I1209 05:19:48.812383 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:19:48.817921 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:19:48.818002 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:19:48.867184 1340508 cri.go:89] found id: ""
	I1209 05:19:48.867221 1340508 logs.go:282] 0 containers: []
	W1209 05:19:48.867230 1340508 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:19:48.867237 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:19:48.867299 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:19:48.906226 1340508 cri.go:89] found id: "f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da"
	I1209 05:19:48.906245 1340508 cri.go:89] found id: ""
	I1209 05:19:48.906252 1340508 logs.go:282] 1 containers: [f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da]
	I1209 05:19:48.906318 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:19:48.912244 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:19:48.912333 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:19:48.949977 1340508 cri.go:89] found id: ""
	I1209 05:19:48.950004 1340508 logs.go:282] 0 containers: []
	W1209 05:19:48.950013 1340508 logs.go:284] No container was found matching "kindnet"
	I1209 05:19:48.950025 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:19:48.950093 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:19:48.985224 1340508 cri.go:89] found id: ""
	I1209 05:19:48.985256 1340508 logs.go:282] 0 containers: []
	W1209 05:19:48.985265 1340508 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:19:48.985309 1340508 logs.go:123] Gathering logs for containerd ...
	I1209 05:19:48.985323 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:19:49.022236 1340508 logs.go:123] Gathering logs for container status ...
	I1209 05:19:49.022270 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:19:49.081237 1340508 logs.go:123] Gathering logs for dmesg ...
	I1209 05:19:49.081265 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:19:49.110062 1340508 logs.go:123] Gathering logs for kube-apiserver [7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547] ...
	I1209 05:19:49.110092 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547"
	I1209 05:19:49.166003 1340508 logs.go:123] Gathering logs for etcd [40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b] ...
	I1209 05:19:49.166055 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b"
	I1209 05:19:49.244912 1340508 logs.go:123] Gathering logs for kube-controller-manager [f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da] ...
	I1209 05:19:49.244944 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da"
	I1209 05:19:49.330273 1340508 logs.go:123] Gathering logs for kubelet ...
	I1209 05:19:49.330463 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:19:49.393810 1340508 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:19:49.393900 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:19:49.483999 1340508 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:19:49.484143 1340508 logs.go:123] Gathering logs for kube-scheduler [ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a] ...
	I1209 05:19:49.484172 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a"
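Once discovery finishes, the "Gathering logs for ..." lines fan out over the same SSH runner: systemd unit logs via journalctl, kernel warnings via dmesg, per-container logs via "crictl logs --tail 400 <id>", plus the "kubectl describe nodes" call that keeps failing. A rough standalone equivalent of that fan-out, assuming the same tools are present on the node; the run() wrapper is illustrative only:

    // A sketch of the log-gathering fan-out; each command string matches
    // one of the "Run: /bin/bash -c ..." lines recorded above.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func run(cmd string) {
    	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
    	if err != nil {
    		fmt.Printf("command %q failed: %v\n", cmd, err)
    	}
    	fmt.Print(string(out))
    }

    func main() {
    	// Unit and kernel logs, capped at the last 400 lines as in the log.
    	run(`sudo journalctl -u kubelet -n 400`)
    	run(`sudo journalctl -u containerd -n 400`)
    	run(`sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`)
    	// Per-container logs for an ID found during discovery; this is the
    	// kube-apiserver ID that appears throughout the log above.
    	id := "7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547"
    	run(fmt.Sprintf(`sudo /usr/local/bin/crictl logs --tail 400 %s`, id))
    }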
	I1209 05:19:52.020918 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:19:52.032869 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:19:52.032942 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:19:52.063133 1340508 cri.go:89] found id: "7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547"
	I1209 05:19:52.063156 1340508 cri.go:89] found id: ""
	I1209 05:19:52.063166 1340508 logs.go:282] 1 containers: [7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547]
	I1209 05:19:52.063223 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:19:52.067246 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:19:52.067326 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:19:52.097577 1340508 cri.go:89] found id: "40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b"
	I1209 05:19:52.097600 1340508 cri.go:89] found id: ""
	I1209 05:19:52.097608 1340508 logs.go:282] 1 containers: [40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b]
	I1209 05:19:52.097667 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:19:52.101573 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:19:52.101653 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:19:52.136253 1340508 cri.go:89] found id: ""
	I1209 05:19:52.136281 1340508 logs.go:282] 0 containers: []
	W1209 05:19:52.136290 1340508 logs.go:284] No container was found matching "coredns"
	I1209 05:19:52.136297 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:19:52.136355 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:19:52.175685 1340508 cri.go:89] found id: "ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a"
	I1209 05:19:52.175708 1340508 cri.go:89] found id: ""
	I1209 05:19:52.175718 1340508 logs.go:282] 1 containers: [ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a]
	I1209 05:19:52.175777 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:19:52.179909 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:19:52.179985 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:19:52.240359 1340508 cri.go:89] found id: ""
	I1209 05:19:52.240384 1340508 logs.go:282] 0 containers: []
	W1209 05:19:52.240392 1340508 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:19:52.240399 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:19:52.240457 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:19:52.276458 1340508 cri.go:89] found id: "f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da"
	I1209 05:19:52.276480 1340508 cri.go:89] found id: ""
	I1209 05:19:52.276489 1340508 logs.go:282] 1 containers: [f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da]
	I1209 05:19:52.276546 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:19:52.280692 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:19:52.280770 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:19:52.314100 1340508 cri.go:89] found id: ""
	I1209 05:19:52.314146 1340508 logs.go:282] 0 containers: []
	W1209 05:19:52.314156 1340508 logs.go:284] No container was found matching "kindnet"
	I1209 05:19:52.314162 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:19:52.314230 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:19:52.342904 1340508 cri.go:89] found id: ""
	I1209 05:19:52.342930 1340508 logs.go:282] 0 containers: []
	W1209 05:19:52.342939 1340508 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:19:52.342971 1340508 logs.go:123] Gathering logs for containerd ...
	I1209 05:19:52.342990 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:19:52.373313 1340508 logs.go:123] Gathering logs for kubelet ...
	I1209 05:19:52.373389 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:19:52.441765 1340508 logs.go:123] Gathering logs for kube-apiserver [7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547] ...
	I1209 05:19:52.441840 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547"
	I1209 05:19:52.484448 1340508 logs.go:123] Gathering logs for container status ...
	I1209 05:19:52.484526 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:19:52.530267 1340508 logs.go:123] Gathering logs for dmesg ...
	I1209 05:19:52.530293 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:19:52.547135 1340508 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:19:52.547213 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:19:52.612890 1340508 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:19:52.612917 1340508 logs.go:123] Gathering logs for etcd [40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b] ...
	I1209 05:19:52.612931 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b"
	I1209 05:19:52.654981 1340508 logs.go:123] Gathering logs for kube-scheduler [ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a] ...
	I1209 05:19:52.655009 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a"
	I1209 05:19:52.696894 1340508 logs.go:123] Gathering logs for kube-controller-manager [f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da] ...
	I1209 05:19:52.696924 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da"
	I1209 05:19:55.237179 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:19:55.248218 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:19:55.248290 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:19:55.288717 1340508 cri.go:89] found id: "7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547"
	I1209 05:19:55.288738 1340508 cri.go:89] found id: ""
	I1209 05:19:55.288746 1340508 logs.go:282] 1 containers: [7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547]
	I1209 05:19:55.288801 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:19:55.294358 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:19:55.294426 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:19:55.342469 1340508 cri.go:89] found id: "40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b"
	I1209 05:19:55.342493 1340508 cri.go:89] found id: ""
	I1209 05:19:55.342501 1340508 logs.go:282] 1 containers: [40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b]
	I1209 05:19:55.342558 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:19:55.347743 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:19:55.347820 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:19:55.392652 1340508 cri.go:89] found id: ""
	I1209 05:19:55.392676 1340508 logs.go:282] 0 containers: []
	W1209 05:19:55.392684 1340508 logs.go:284] No container was found matching "coredns"
	I1209 05:19:55.392691 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:19:55.392748 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:19:55.430737 1340508 cri.go:89] found id: "ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a"
	I1209 05:19:55.430759 1340508 cri.go:89] found id: ""
	I1209 05:19:55.430768 1340508 logs.go:282] 1 containers: [ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a]
	I1209 05:19:55.430823 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:19:55.434708 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:19:55.434779 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:19:55.470226 1340508 cri.go:89] found id: ""
	I1209 05:19:55.470249 1340508 logs.go:282] 0 containers: []
	W1209 05:19:55.470258 1340508 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:19:55.470265 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:19:55.470328 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:19:55.525325 1340508 cri.go:89] found id: "f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da"
	I1209 05:19:55.525345 1340508 cri.go:89] found id: ""
	I1209 05:19:55.525353 1340508 logs.go:282] 1 containers: [f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da]
	I1209 05:19:55.525412 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:19:55.534603 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:19:55.534672 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:19:55.574522 1340508 cri.go:89] found id: ""
	I1209 05:19:55.574544 1340508 logs.go:282] 0 containers: []
	W1209 05:19:55.574552 1340508 logs.go:284] No container was found matching "kindnet"
	I1209 05:19:55.574558 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:19:55.574619 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:19:55.604115 1340508 cri.go:89] found id: ""
	I1209 05:19:55.604138 1340508 logs.go:282] 0 containers: []
	W1209 05:19:55.604145 1340508 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:19:55.604164 1340508 logs.go:123] Gathering logs for kubelet ...
	I1209 05:19:55.604174 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:19:55.671752 1340508 logs.go:123] Gathering logs for dmesg ...
	I1209 05:19:55.671790 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:19:55.689405 1340508 logs.go:123] Gathering logs for kube-scheduler [ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a] ...
	I1209 05:19:55.689434 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a"
	I1209 05:19:55.725510 1340508 logs.go:123] Gathering logs for kube-controller-manager [f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da] ...
	I1209 05:19:55.725542 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da"
	I1209 05:19:55.778057 1340508 logs.go:123] Gathering logs for container status ...
	I1209 05:19:55.778086 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:19:55.823451 1340508 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:19:55.823479 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:19:55.908959 1340508 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:19:55.908983 1340508 logs.go:123] Gathering logs for kube-apiserver [7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547] ...
	I1209 05:19:55.908997 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547"
	I1209 05:19:55.976956 1340508 logs.go:123] Gathering logs for etcd [40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b] ...
	I1209 05:19:55.976989 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b"
	I1209 05:19:56.023961 1340508 logs.go:123] Gathering logs for containerd ...
	I1209 05:19:56.023996 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
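The timestamps show the whole pass repeating roughly every three seconds, each round opening with "sudo pgrep -xnf kube-apiserver.*minikube.*" and re-gathering diagnostics while the API server stays unreachable. A sketch of such a wait loop, assuming a fixed interval and deadline inferred from the timestamps; minikube's real retry policy may differ:

    // A sketch of the apiserver wait loop implied by the repeating cycles.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func apiserverRunning() bool {
    	// Mirrors "sudo pgrep -xnf kube-apiserver.*minikube.*" from the log;
    	// pgrep exits 0 only when a matching process exists.
    	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
    }

    func main() {
    	deadline := time.Now().Add(6 * time.Minute) // assumed deadline, for illustration
    	for time.Now().Before(deadline) {
    		if apiserverRunning() {
    			fmt.Println("kube-apiserver process found")
    			return
    		}
    		time.Sleep(3 * time.Second) // interval inferred from the timestamps above
    	}
    	fmt.Println("timed out waiting for kube-apiserver")
    }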
	I1209 05:19:58.564798 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:19:58.575435 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:19:58.575511 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:19:58.612380 1340508 cri.go:89] found id: "7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547"
	I1209 05:19:58.612404 1340508 cri.go:89] found id: ""
	I1209 05:19:58.612413 1340508 logs.go:282] 1 containers: [7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547]
	I1209 05:19:58.612477 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:19:58.616805 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:19:58.616877 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:19:58.663675 1340508 cri.go:89] found id: "40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b"
	I1209 05:19:58.663698 1340508 cri.go:89] found id: ""
	I1209 05:19:58.663706 1340508 logs.go:282] 1 containers: [40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b]
	I1209 05:19:58.663773 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:19:58.668693 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:19:58.668781 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:19:58.710276 1340508 cri.go:89] found id: ""
	I1209 05:19:58.710301 1340508 logs.go:282] 0 containers: []
	W1209 05:19:58.710309 1340508 logs.go:284] No container was found matching "coredns"
	I1209 05:19:58.710316 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:19:58.710375 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:19:58.748959 1340508 cri.go:89] found id: "ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a"
	I1209 05:19:58.748988 1340508 cri.go:89] found id: ""
	I1209 05:19:58.748997 1340508 logs.go:282] 1 containers: [ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a]
	I1209 05:19:58.749075 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:19:58.753730 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:19:58.753808 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:19:58.786672 1340508 cri.go:89] found id: ""
	I1209 05:19:58.786702 1340508 logs.go:282] 0 containers: []
	W1209 05:19:58.786713 1340508 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:19:58.786720 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:19:58.786799 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:19:58.817822 1340508 cri.go:89] found id: "f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da"
	I1209 05:19:58.817848 1340508 cri.go:89] found id: ""
	I1209 05:19:58.817862 1340508 logs.go:282] 1 containers: [f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da]
	I1209 05:19:58.817938 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:19:58.822860 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:19:58.822948 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:19:58.862709 1340508 cri.go:89] found id: ""
	I1209 05:19:58.862743 1340508 logs.go:282] 0 containers: []
	W1209 05:19:58.862753 1340508 logs.go:284] No container was found matching "kindnet"
	I1209 05:19:58.862766 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:19:58.862826 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:19:58.893125 1340508 cri.go:89] found id: ""
	I1209 05:19:58.893152 1340508 logs.go:282] 0 containers: []
	W1209 05:19:58.893161 1340508 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:19:58.893174 1340508 logs.go:123] Gathering logs for kube-controller-manager [f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da] ...
	I1209 05:19:58.893190 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da"
	I1209 05:19:58.925679 1340508 logs.go:123] Gathering logs for containerd ...
	I1209 05:19:58.925712 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:19:58.973337 1340508 logs.go:123] Gathering logs for kubelet ...
	I1209 05:19:58.973376 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:19:59.084675 1340508 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:19:59.084716 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:19:59.178042 1340508 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:19:59.178066 1340508 logs.go:123] Gathering logs for etcd [40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b] ...
	I1209 05:19:59.178082 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b"
	I1209 05:19:59.224230 1340508 logs.go:123] Gathering logs for container status ...
	I1209 05:19:59.224266 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:19:59.287498 1340508 logs.go:123] Gathering logs for dmesg ...
	I1209 05:19:59.287526 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:19:59.304712 1340508 logs.go:123] Gathering logs for kube-apiserver [7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547] ...
	I1209 05:19:59.304748 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547"
	I1209 05:19:59.340214 1340508 logs.go:123] Gathering logs for kube-scheduler [ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a] ...
	I1209 05:19:59.340244 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a"
	I1209 05:20:01.882857 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:20:01.897483 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:20:01.897569 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:20:01.949283 1340508 cri.go:89] found id: "7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547"
	I1209 05:20:01.949309 1340508 cri.go:89] found id: ""
	I1209 05:20:01.949319 1340508 logs.go:282] 1 containers: [7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547]
	I1209 05:20:01.949386 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:20:01.954135 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:20:01.954231 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:20:01.996496 1340508 cri.go:89] found id: "40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b"
	I1209 05:20:01.996521 1340508 cri.go:89] found id: ""
	I1209 05:20:01.996530 1340508 logs.go:282] 1 containers: [40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b]
	I1209 05:20:01.996599 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:20:02.008639 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:20:02.008730 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:20:02.075432 1340508 cri.go:89] found id: ""
	I1209 05:20:02.075459 1340508 logs.go:282] 0 containers: []
	W1209 05:20:02.075468 1340508 logs.go:284] No container was found matching "coredns"
	I1209 05:20:02.075475 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:20:02.075536 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:20:02.153959 1340508 cri.go:89] found id: "ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a"
	I1209 05:20:02.153981 1340508 cri.go:89] found id: ""
	I1209 05:20:02.153989 1340508 logs.go:282] 1 containers: [ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a]
	I1209 05:20:02.154048 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:20:02.157944 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:20:02.158018 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:20:02.216498 1340508 cri.go:89] found id: ""
	I1209 05:20:02.216525 1340508 logs.go:282] 0 containers: []
	W1209 05:20:02.216533 1340508 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:20:02.216539 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:20:02.216609 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:20:02.267193 1340508 cri.go:89] found id: "f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da"
	I1209 05:20:02.267216 1340508 cri.go:89] found id: ""
	I1209 05:20:02.267226 1340508 logs.go:282] 1 containers: [f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da]
	I1209 05:20:02.267286 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:20:02.277560 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:20:02.277647 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:20:02.331040 1340508 cri.go:89] found id: ""
	I1209 05:20:02.331069 1340508 logs.go:282] 0 containers: []
	W1209 05:20:02.331082 1340508 logs.go:284] No container was found matching "kindnet"
	I1209 05:20:02.331088 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:20:02.331155 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:20:02.383112 1340508 cri.go:89] found id: ""
	I1209 05:20:02.383138 1340508 logs.go:282] 0 containers: []
	W1209 05:20:02.383146 1340508 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:20:02.383159 1340508 logs.go:123] Gathering logs for kubelet ...
	I1209 05:20:02.383171 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:20:02.481550 1340508 logs.go:123] Gathering logs for dmesg ...
	I1209 05:20:02.481586 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:20:02.503627 1340508 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:20:02.503656 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:20:02.635965 1340508 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:20:02.635984 1340508 logs.go:123] Gathering logs for etcd [40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b] ...
	I1209 05:20:02.635997 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b"
	I1209 05:20:02.682092 1340508 logs.go:123] Gathering logs for kube-scheduler [ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a] ...
	I1209 05:20:02.682124 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a"
	I1209 05:20:02.745919 1340508 logs.go:123] Gathering logs for kube-controller-manager [f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da] ...
	I1209 05:20:02.745952 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da"
	I1209 05:20:02.804328 1340508 logs.go:123] Gathering logs for containerd ...
	I1209 05:20:02.804359 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:20:02.839529 1340508 logs.go:123] Gathering logs for kube-apiserver [7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547] ...
	I1209 05:20:02.839566 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547"
	I1209 05:20:02.889548 1340508 logs.go:123] Gathering logs for container status ...
	I1209 05:20:02.889580 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:20:05.446315 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:20:05.456366 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:20:05.456435 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:20:05.482074 1340508 cri.go:89] found id: "7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547"
	I1209 05:20:05.482093 1340508 cri.go:89] found id: ""
	I1209 05:20:05.482101 1340508 logs.go:282] 1 containers: [7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547]
	I1209 05:20:05.482156 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:20:05.485714 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:20:05.485781 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:20:05.510042 1340508 cri.go:89] found id: "40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b"
	I1209 05:20:05.510063 1340508 cri.go:89] found id: ""
	I1209 05:20:05.510071 1340508 logs.go:282] 1 containers: [40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b]
	I1209 05:20:05.510147 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:20:05.513955 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:20:05.514025 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:20:05.538815 1340508 cri.go:89] found id: ""
	I1209 05:20:05.538837 1340508 logs.go:282] 0 containers: []
	W1209 05:20:05.538846 1340508 logs.go:284] No container was found matching "coredns"
	I1209 05:20:05.538852 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:20:05.538916 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:20:05.562962 1340508 cri.go:89] found id: "ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a"
	I1209 05:20:05.562985 1340508 cri.go:89] found id: ""
	I1209 05:20:05.562993 1340508 logs.go:282] 1 containers: [ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a]
	I1209 05:20:05.563051 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:20:05.566638 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:20:05.566705 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:20:05.592432 1340508 cri.go:89] found id: ""
	I1209 05:20:05.592456 1340508 logs.go:282] 0 containers: []
	W1209 05:20:05.592465 1340508 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:20:05.592471 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:20:05.592536 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:20:05.616397 1340508 cri.go:89] found id: "f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da"
	I1209 05:20:05.616419 1340508 cri.go:89] found id: ""
	I1209 05:20:05.616428 1340508 logs.go:282] 1 containers: [f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da]
	I1209 05:20:05.616495 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:20:05.620148 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:20:05.620229 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:20:05.644632 1340508 cri.go:89] found id: ""
	I1209 05:20:05.644655 1340508 logs.go:282] 0 containers: []
	W1209 05:20:05.644663 1340508 logs.go:284] No container was found matching "kindnet"
	I1209 05:20:05.644669 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:20:05.644732 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:20:05.673795 1340508 cri.go:89] found id: ""
	I1209 05:20:05.673820 1340508 logs.go:282] 0 containers: []
	W1209 05:20:05.673830 1340508 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:20:05.673853 1340508 logs.go:123] Gathering logs for kubelet ...
	I1209 05:20:05.673865 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:20:05.731686 1340508 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:20:05.731721 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:20:05.795285 1340508 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:20:05.795307 1340508 logs.go:123] Gathering logs for kube-apiserver [7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547] ...
	I1209 05:20:05.795320 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547"
	I1209 05:20:05.828925 1340508 logs.go:123] Gathering logs for etcd [40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b] ...
	I1209 05:20:05.828958 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b"
	I1209 05:20:05.862199 1340508 logs.go:123] Gathering logs for kube-scheduler [ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a] ...
	I1209 05:20:05.862231 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a"
	I1209 05:20:05.894205 1340508 logs.go:123] Gathering logs for kube-controller-manager [f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da] ...
	I1209 05:20:05.894238 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da"
	I1209 05:20:05.923450 1340508 logs.go:123] Gathering logs for container status ...
	I1209 05:20:05.923482 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:20:05.987716 1340508 logs.go:123] Gathering logs for dmesg ...
	I1209 05:20:05.987744 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:20:06.006978 1340508 logs.go:123] Gathering logs for containerd ...
	I1209 05:20:06.007014 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
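Every "describe nodes" attempt in this stretch fails the same way: the connection to localhost:8443 is refused, meaning nothing is accepting TCP on the API server port inside the node even though a kube-apiserver container ID is found. A quick probe one could run to confirm that from inside the node; this is illustrative and not part of minikube:

    // Probe the apiserver port that kubectl is being refused on above.
    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
    	if err != nil {
    		// Matches the kubectl error in the log: connection refused.
    		fmt.Println("refused/unreachable:", err)
    		return
    	}
    	conn.Close()
    	fmt.Println("port 8443 is accepting connections")
    }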
	I1209 05:20:08.540850 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:20:08.550410 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:20:08.550482 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:20:08.576261 1340508 cri.go:89] found id: "7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547"
	I1209 05:20:08.576324 1340508 cri.go:89] found id: ""
	I1209 05:20:08.576346 1340508 logs.go:282] 1 containers: [7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547]
	I1209 05:20:08.576430 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:20:08.579957 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:20:08.580075 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:20:08.604986 1340508 cri.go:89] found id: "40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b"
	I1209 05:20:08.605009 1340508 cri.go:89] found id: ""
	I1209 05:20:08.605018 1340508 logs.go:282] 1 containers: [40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b]
	I1209 05:20:08.605100 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:20:08.608718 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:20:08.608795 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:20:08.632724 1340508 cri.go:89] found id: ""
	I1209 05:20:08.632753 1340508 logs.go:282] 0 containers: []
	W1209 05:20:08.632763 1340508 logs.go:284] No container was found matching "coredns"
	I1209 05:20:08.632769 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:20:08.632832 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:20:08.662888 1340508 cri.go:89] found id: "ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a"
	I1209 05:20:08.662907 1340508 cri.go:89] found id: ""
	I1209 05:20:08.662915 1340508 logs.go:282] 1 containers: [ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a]
	I1209 05:20:08.662969 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:20:08.666584 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:20:08.666653 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:20:08.692165 1340508 cri.go:89] found id: ""
	I1209 05:20:08.692187 1340508 logs.go:282] 0 containers: []
	W1209 05:20:08.692195 1340508 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:20:08.692202 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:20:08.692260 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:20:08.716786 1340508 cri.go:89] found id: "f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da"
	I1209 05:20:08.716809 1340508 cri.go:89] found id: ""
	I1209 05:20:08.716817 1340508 logs.go:282] 1 containers: [f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da]
	I1209 05:20:08.716873 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:20:08.720475 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:20:08.720575 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:20:08.744616 1340508 cri.go:89] found id: ""
	I1209 05:20:08.744685 1340508 logs.go:282] 0 containers: []
	W1209 05:20:08.744701 1340508 logs.go:284] No container was found matching "kindnet"
	I1209 05:20:08.744709 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:20:08.744772 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:20:08.768297 1340508 cri.go:89] found id: ""
	I1209 05:20:08.768320 1340508 logs.go:282] 0 containers: []
	W1209 05:20:08.768328 1340508 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:20:08.768342 1340508 logs.go:123] Gathering logs for containerd ...
	I1209 05:20:08.768353 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:20:08.796710 1340508 logs.go:123] Gathering logs for kubelet ...
	I1209 05:20:08.796745 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:20:08.854798 1340508 logs.go:123] Gathering logs for dmesg ...
	I1209 05:20:08.854833 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:20:08.871922 1340508 logs.go:123] Gathering logs for etcd [40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b] ...
	I1209 05:20:08.871951 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b"
	I1209 05:20:08.903994 1340508 logs.go:123] Gathering logs for kube-controller-manager [f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da] ...
	I1209 05:20:08.904040 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da"
	I1209 05:20:08.935823 1340508 logs.go:123] Gathering logs for container status ...
	I1209 05:20:08.935850 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:20:08.983141 1340508 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:20:08.983168 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:20:09.052221 1340508 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:20:09.052252 1340508 logs.go:123] Gathering logs for kube-apiserver [7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547] ...
	I1209 05:20:09.052267 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547"
	I1209 05:20:09.085599 1340508 logs.go:123] Gathering logs for kube-scheduler [ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a] ...
	I1209 05:20:09.085638 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a"
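	The cycle above repeats for the remainder of this start attempt: apiserver, etcd, scheduler, and controller-manager containers all exist, but every "describe nodes" call fails because nothing is accepting connections on localhost:8443, and coredns/kube-proxy never appear. A minimal sketch of the readiness loop the log is running, in shell (the kubectl path and kubeconfig are the ones quoted above; the /readyz call is an illustrative health check, not the exact request minikube issues):

	    # Poll until an apiserver process exists AND answers on the wire.
	    while ! sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null || \
	          ! sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl \
	              --kubeconfig=/var/lib/minikube/kubeconfig \
	              get --raw /readyz >/dev/null 2>&1; do
	      sleep 3   # the log shows roughly 3s between probe attempts
	    done
	    echo "apiserver ready"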
	I1209 05:20:11.616869 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:20:11.626757 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:20:11.626828 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:20:11.650125 1340508 cri.go:89] found id: "7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547"
	I1209 05:20:11.650149 1340508 cri.go:89] found id: ""
	I1209 05:20:11.650158 1340508 logs.go:282] 1 containers: [7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547]
	I1209 05:20:11.650245 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:20:11.653872 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:20:11.653946 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:20:11.679131 1340508 cri.go:89] found id: "40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b"
	I1209 05:20:11.679153 1340508 cri.go:89] found id: ""
	I1209 05:20:11.679162 1340508 logs.go:282] 1 containers: [40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b]
	I1209 05:20:11.679218 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:20:11.682852 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:20:11.682930 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:20:11.707652 1340508 cri.go:89] found id: ""
	I1209 05:20:11.707676 1340508 logs.go:282] 0 containers: []
	W1209 05:20:11.707684 1340508 logs.go:284] No container was found matching "coredns"
	I1209 05:20:11.707690 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:20:11.707749 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:20:11.732589 1340508 cri.go:89] found id: "ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a"
	I1209 05:20:11.732610 1340508 cri.go:89] found id: ""
	I1209 05:20:11.732618 1340508 logs.go:282] 1 containers: [ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a]
	I1209 05:20:11.732690 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:20:11.736299 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:20:11.736412 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:20:11.760972 1340508 cri.go:89] found id: ""
	I1209 05:20:11.761037 1340508 logs.go:282] 0 containers: []
	W1209 05:20:11.761062 1340508 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:20:11.761081 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:20:11.761167 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:20:11.785879 1340508 cri.go:89] found id: "f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da"
	I1209 05:20:11.785901 1340508 cri.go:89] found id: ""
	I1209 05:20:11.785910 1340508 logs.go:282] 1 containers: [f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da]
	I1209 05:20:11.785968 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:20:11.789726 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:20:11.789800 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:20:11.813521 1340508 cri.go:89] found id: ""
	I1209 05:20:11.813588 1340508 logs.go:282] 0 containers: []
	W1209 05:20:11.813605 1340508 logs.go:284] No container was found matching "kindnet"
	I1209 05:20:11.813613 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:20:11.813678 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:20:11.837644 1340508 cri.go:89] found id: ""
	I1209 05:20:11.837682 1340508 logs.go:282] 0 containers: []
	W1209 05:20:11.837692 1340508 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:20:11.837730 1340508 logs.go:123] Gathering logs for etcd [40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b] ...
	I1209 05:20:11.837761 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b"
	I1209 05:20:11.868657 1340508 logs.go:123] Gathering logs for containerd ...
	I1209 05:20:11.868687 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:20:11.897043 1340508 logs.go:123] Gathering logs for kubelet ...
	I1209 05:20:11.897077 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:20:11.972541 1340508 logs.go:123] Gathering logs for dmesg ...
	I1209 05:20:11.972630 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:20:11.991851 1340508 logs.go:123] Gathering logs for kube-apiserver [7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547] ...
	I1209 05:20:11.991935 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547"
	I1209 05:20:12.037750 1340508 logs.go:123] Gathering logs for kube-scheduler [ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a] ...
	I1209 05:20:12.037786 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a"
	I1209 05:20:12.072598 1340508 logs.go:123] Gathering logs for kube-controller-manager [f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da] ...
	I1209 05:20:12.072631 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da"
	I1209 05:20:12.103573 1340508 logs.go:123] Gathering logs for container status ...
	I1209 05:20:12.103677 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:20:12.132404 1340508 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:20:12.132435 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:20:12.195431 1340508 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:20:14.696350 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:20:14.706091 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:20:14.706166 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:20:14.730358 1340508 cri.go:89] found id: "7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547"
	I1209 05:20:14.730378 1340508 cri.go:89] found id: ""
	I1209 05:20:14.730386 1340508 logs.go:282] 1 containers: [7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547]
	I1209 05:20:14.730440 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:20:14.734006 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:20:14.734082 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:20:14.758506 1340508 cri.go:89] found id: "40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b"
	I1209 05:20:14.758577 1340508 cri.go:89] found id: ""
	I1209 05:20:14.758614 1340508 logs.go:282] 1 containers: [40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b]
	I1209 05:20:14.758705 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:20:14.762375 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:20:14.762449 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:20:14.788442 1340508 cri.go:89] found id: ""
	I1209 05:20:14.788465 1340508 logs.go:282] 0 containers: []
	W1209 05:20:14.788474 1340508 logs.go:284] No container was found matching "coredns"
	I1209 05:20:14.788481 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:20:14.788539 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:20:14.817073 1340508 cri.go:89] found id: "ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a"
	I1209 05:20:14.817094 1340508 cri.go:89] found id: ""
	I1209 05:20:14.817103 1340508 logs.go:282] 1 containers: [ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a]
	I1209 05:20:14.817158 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:20:14.820566 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:20:14.820633 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:20:14.844293 1340508 cri.go:89] found id: ""
	I1209 05:20:14.844316 1340508 logs.go:282] 0 containers: []
	W1209 05:20:14.844324 1340508 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:20:14.844330 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:20:14.844388 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:20:14.868748 1340508 cri.go:89] found id: "f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da"
	I1209 05:20:14.868771 1340508 cri.go:89] found id: ""
	I1209 05:20:14.868780 1340508 logs.go:282] 1 containers: [f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da]
	I1209 05:20:14.868840 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:20:14.872664 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:20:14.872739 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:20:14.895920 1340508 cri.go:89] found id: ""
	I1209 05:20:14.895946 1340508 logs.go:282] 0 containers: []
	W1209 05:20:14.895955 1340508 logs.go:284] No container was found matching "kindnet"
	I1209 05:20:14.895961 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:20:14.896034 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:20:14.921768 1340508 cri.go:89] found id: ""
	I1209 05:20:14.921793 1340508 logs.go:282] 0 containers: []
	W1209 05:20:14.921802 1340508 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:20:14.921832 1340508 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:20:14.921851 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:20:15.012135 1340508 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:20:15.012213 1340508 logs.go:123] Gathering logs for etcd [40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b] ...
	I1209 05:20:15.012244 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b"
	I1209 05:20:15.052447 1340508 logs.go:123] Gathering logs for kube-scheduler [ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a] ...
	I1209 05:20:15.052478 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a"
	I1209 05:20:15.087803 1340508 logs.go:123] Gathering logs for kube-controller-manager [f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da] ...
	I1209 05:20:15.087843 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da"
	I1209 05:20:15.116147 1340508 logs.go:123] Gathering logs for containerd ...
	I1209 05:20:15.116178 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:20:15.145012 1340508 logs.go:123] Gathering logs for kubelet ...
	I1209 05:20:15.145049 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:20:15.203973 1340508 logs.go:123] Gathering logs for dmesg ...
	I1209 05:20:15.204009 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:20:15.220339 1340508 logs.go:123] Gathering logs for kube-apiserver [7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547] ...
	I1209 05:20:15.220371 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547"
	I1209 05:20:15.254087 1340508 logs.go:123] Gathering logs for container status ...
	I1209 05:20:15.254118 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
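	Each failed probe triggers the same evidence sweep. Reproducing it by hand looks roughly like the following; the commands mirror the ones quoted in the log, the container IDs are the ones found above, and crictl may live at a different path on other images:

	    # Host-side logs
	    sudo journalctl -u kubelet -n 400
	    sudo journalctl -u containerd -n 400
	    sudo dmesg --level warn,err,crit,alert,emerg | tail -n 400
	    # Per-component logs via the CRI
	    sudo crictl logs --tail 400 7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547   # kube-apiserver
	    sudo crictl logs --tail 400 40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b   # etcd
	    sudo crictl ps -a   # overall container status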
	I1209 05:20:17.783800 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:20:17.793660 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:20:17.793730 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:20:17.816791 1340508 cri.go:89] found id: "7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547"
	I1209 05:20:17.816811 1340508 cri.go:89] found id: ""
	I1209 05:20:17.816819 1340508 logs.go:282] 1 containers: [7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547]
	I1209 05:20:17.816873 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:20:17.820418 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:20:17.820489 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:20:17.844426 1340508 cri.go:89] found id: "40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b"
	I1209 05:20:17.844447 1340508 cri.go:89] found id: ""
	I1209 05:20:17.844456 1340508 logs.go:282] 1 containers: [40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b]
	I1209 05:20:17.844510 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:20:17.847885 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:20:17.847952 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:20:17.873844 1340508 cri.go:89] found id: ""
	I1209 05:20:17.873867 1340508 logs.go:282] 0 containers: []
	W1209 05:20:17.873875 1340508 logs.go:284] No container was found matching "coredns"
	I1209 05:20:17.873881 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:20:17.873938 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:20:17.897885 1340508 cri.go:89] found id: "ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a"
	I1209 05:20:17.897909 1340508 cri.go:89] found id: ""
	I1209 05:20:17.897917 1340508 logs.go:282] 1 containers: [ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a]
	I1209 05:20:17.897981 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:20:17.901293 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:20:17.901358 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:20:17.923976 1340508 cri.go:89] found id: ""
	I1209 05:20:17.923999 1340508 logs.go:282] 0 containers: []
	W1209 05:20:17.924007 1340508 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:20:17.924036 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:20:17.924096 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:20:17.964429 1340508 cri.go:89] found id: "f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da"
	I1209 05:20:17.964453 1340508 cri.go:89] found id: ""
	I1209 05:20:17.964462 1340508 logs.go:282] 1 containers: [f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da]
	I1209 05:20:17.964518 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:20:17.968235 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:20:17.968306 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:20:17.994375 1340508 cri.go:89] found id: ""
	I1209 05:20:17.994399 1340508 logs.go:282] 0 containers: []
	W1209 05:20:17.994407 1340508 logs.go:284] No container was found matching "kindnet"
	I1209 05:20:17.994414 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:20:17.994473 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:20:18.026055 1340508 cri.go:89] found id: ""
	I1209 05:20:18.026127 1340508 logs.go:282] 0 containers: []
	W1209 05:20:18.026142 1340508 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:20:18.026157 1340508 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:20:18.026170 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:20:18.085526 1340508 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:20:18.085546 1340508 logs.go:123] Gathering logs for kube-controller-manager [f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da] ...
	I1209 05:20:18.085578 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da"
	I1209 05:20:18.113302 1340508 logs.go:123] Gathering logs for containerd ...
	I1209 05:20:18.113330 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:20:18.142161 1340508 logs.go:123] Gathering logs for container status ...
	I1209 05:20:18.142195 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:20:18.169914 1340508 logs.go:123] Gathering logs for kubelet ...
	I1209 05:20:18.169942 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:20:18.228030 1340508 logs.go:123] Gathering logs for dmesg ...
	I1209 05:20:18.228066 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:20:18.244221 1340508 logs.go:123] Gathering logs for kube-apiserver [7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547] ...
	I1209 05:20:18.244248 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547"
	I1209 05:20:18.281497 1340508 logs.go:123] Gathering logs for etcd [40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b] ...
	I1209 05:20:18.281525 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b"
	I1209 05:20:18.322078 1340508 logs.go:123] Gathering logs for kube-scheduler [ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a] ...
	I1209 05:20:18.322106 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a"
	I1209 05:20:20.857211 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:20:20.866787 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:20:20.866859 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:20:20.893705 1340508 cri.go:89] found id: "7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547"
	I1209 05:20:20.893730 1340508 cri.go:89] found id: ""
	I1209 05:20:20.893738 1340508 logs.go:282] 1 containers: [7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547]
	I1209 05:20:20.893802 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:20:20.897284 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:20:20.897351 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:20:20.920856 1340508 cri.go:89] found id: "40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b"
	I1209 05:20:20.920876 1340508 cri.go:89] found id: ""
	I1209 05:20:20.920923 1340508 logs.go:282] 1 containers: [40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b]
	I1209 05:20:20.920985 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:20:20.924692 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:20:20.924763 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:20:20.950311 1340508 cri.go:89] found id: ""
	I1209 05:20:20.950339 1340508 logs.go:282] 0 containers: []
	W1209 05:20:20.950348 1340508 logs.go:284] No container was found matching "coredns"
	I1209 05:20:20.950359 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:20:20.950424 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:20:20.979068 1340508 cri.go:89] found id: "ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a"
	I1209 05:20:20.979091 1340508 cri.go:89] found id: ""
	I1209 05:20:20.979099 1340508 logs.go:282] 1 containers: [ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a]
	I1209 05:20:20.979156 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:20:20.982960 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:20:20.983031 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:20:21.008949 1340508 cri.go:89] found id: ""
	I1209 05:20:21.008973 1340508 logs.go:282] 0 containers: []
	W1209 05:20:21.008982 1340508 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:20:21.008988 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:20:21.009051 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:20:21.035825 1340508 cri.go:89] found id: "f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da"
	I1209 05:20:21.035851 1340508 cri.go:89] found id: ""
	I1209 05:20:21.035860 1340508 logs.go:282] 1 containers: [f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da]
	I1209 05:20:21.035932 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:20:21.039424 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:20:21.039491 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:20:21.064601 1340508 cri.go:89] found id: ""
	I1209 05:20:21.064626 1340508 logs.go:282] 0 containers: []
	W1209 05:20:21.064636 1340508 logs.go:284] No container was found matching "kindnet"
	I1209 05:20:21.064642 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:20:21.064704 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:20:21.089181 1340508 cri.go:89] found id: ""
	I1209 05:20:21.089247 1340508 logs.go:282] 0 containers: []
	W1209 05:20:21.089263 1340508 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:20:21.089278 1340508 logs.go:123] Gathering logs for kubelet ...
	I1209 05:20:21.089289 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:20:21.150105 1340508 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:20:21.150143 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:20:21.219301 1340508 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:20:21.219321 1340508 logs.go:123] Gathering logs for etcd [40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b] ...
	I1209 05:20:21.219334 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b"
	I1209 05:20:21.251111 1340508 logs.go:123] Gathering logs for kube-controller-manager [f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da] ...
	I1209 05:20:21.251143 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da"
	I1209 05:20:21.285109 1340508 logs.go:123] Gathering logs for containerd ...
	I1209 05:20:21.285140 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:20:21.314178 1340508 logs.go:123] Gathering logs for dmesg ...
	I1209 05:20:21.314217 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:20:21.331343 1340508 logs.go:123] Gathering logs for kube-apiserver [7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547] ...
	I1209 05:20:21.331373 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547"
	I1209 05:20:21.365266 1340508 logs.go:123] Gathering logs for kube-scheduler [ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a] ...
	I1209 05:20:21.365302 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a"
	I1209 05:20:21.397604 1340508 logs.go:123] Gathering logs for container status ...
	I1209 05:20:21.397639 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:20:23.927161 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:20:23.940350 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:20:23.940444 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:20:23.979373 1340508 cri.go:89] found id: "7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547"
	I1209 05:20:23.979406 1340508 cri.go:89] found id: ""
	I1209 05:20:23.979415 1340508 logs.go:282] 1 containers: [7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547]
	I1209 05:20:23.979482 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:20:23.983614 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:20:23.983695 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:20:24.011293 1340508 cri.go:89] found id: "40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b"
	I1209 05:20:24.011320 1340508 cri.go:89] found id: ""
	I1209 05:20:24.011339 1340508 logs.go:282] 1 containers: [40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b]
	I1209 05:20:24.011433 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:20:24.015586 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:20:24.015707 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:20:24.046931 1340508 cri.go:89] found id: ""
	I1209 05:20:24.046966 1340508 logs.go:282] 0 containers: []
	W1209 05:20:24.046975 1340508 logs.go:284] No container was found matching "coredns"
	I1209 05:20:24.046983 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:20:24.047069 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:20:24.073423 1340508 cri.go:89] found id: "ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a"
	I1209 05:20:24.073492 1340508 cri.go:89] found id: ""
	I1209 05:20:24.073508 1340508 logs.go:282] 1 containers: [ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a]
	I1209 05:20:24.073584 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:20:24.077532 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:20:24.077615 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:20:24.103289 1340508 cri.go:89] found id: ""
	I1209 05:20:24.103313 1340508 logs.go:282] 0 containers: []
	W1209 05:20:24.103322 1340508 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:20:24.103328 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:20:24.103396 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:20:24.128114 1340508 cri.go:89] found id: "f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da"
	I1209 05:20:24.128134 1340508 cri.go:89] found id: ""
	I1209 05:20:24.128142 1340508 logs.go:282] 1 containers: [f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da]
	I1209 05:20:24.128199 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:20:24.131719 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:20:24.131793 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:20:24.156803 1340508 cri.go:89] found id: ""
	I1209 05:20:24.156826 1340508 logs.go:282] 0 containers: []
	W1209 05:20:24.156834 1340508 logs.go:284] No container was found matching "kindnet"
	I1209 05:20:24.156840 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:20:24.156908 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:20:24.185636 1340508 cri.go:89] found id: ""
	I1209 05:20:24.185712 1340508 logs.go:282] 0 containers: []
	W1209 05:20:24.185735 1340508 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:20:24.185765 1340508 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:20:24.185790 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:20:24.249661 1340508 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:20:24.249687 1340508 logs.go:123] Gathering logs for kube-apiserver [7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547] ...
	I1209 05:20:24.249700 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547"
	I1209 05:20:24.282229 1340508 logs.go:123] Gathering logs for etcd [40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b] ...
	I1209 05:20:24.282259 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b"
	I1209 05:20:24.315832 1340508 logs.go:123] Gathering logs for kube-scheduler [ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a] ...
	I1209 05:20:24.315863 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a"
	I1209 05:20:24.347123 1340508 logs.go:123] Gathering logs for containerd ...
	I1209 05:20:24.347154 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:20:24.379877 1340508 logs.go:123] Gathering logs for kubelet ...
	I1209 05:20:24.379918 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:20:24.439520 1340508 logs.go:123] Gathering logs for dmesg ...
	I1209 05:20:24.439558 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:20:24.455156 1340508 logs.go:123] Gathering logs for kube-controller-manager [f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da] ...
	I1209 05:20:24.455185 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da"
	I1209 05:20:24.484213 1340508 logs.go:123] Gathering logs for container status ...
	I1209 05:20:24.484241 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
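	The container IDs used in those log commands come from the name-filtered CRI listings interleaved above. Only four control-plane containers are ever found, which is why the coredns, kube-proxy, kindnet, and storage-provisioner queries keep returning empty. By hand:

	    # --quiet prints only container IDs; an empty result means no
	    # container with that name was ever created on this node.
	    sudo crictl ps -a --quiet --name=kube-apiserver   # 7af9450f9149...
	    sudo crictl ps -a --quiet --name=coredns          # (empty)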
	I1209 05:20:27.017320 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:20:27.028117 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:20:27.028194 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:20:27.055007 1340508 cri.go:89] found id: "7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547"
	I1209 05:20:27.055028 1340508 cri.go:89] found id: ""
	I1209 05:20:27.055036 1340508 logs.go:282] 1 containers: [7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547]
	I1209 05:20:27.055099 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:20:27.058672 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:20:27.058748 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:20:27.086293 1340508 cri.go:89] found id: "40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b"
	I1209 05:20:27.086314 1340508 cri.go:89] found id: ""
	I1209 05:20:27.086322 1340508 logs.go:282] 1 containers: [40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b]
	I1209 05:20:27.086377 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:20:27.090016 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:20:27.090095 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:20:27.114636 1340508 cri.go:89] found id: ""
	I1209 05:20:27.114661 1340508 logs.go:282] 0 containers: []
	W1209 05:20:27.114676 1340508 logs.go:284] No container was found matching "coredns"
	I1209 05:20:27.114684 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:20:27.114741 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:20:27.144303 1340508 cri.go:89] found id: "ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a"
	I1209 05:20:27.144327 1340508 cri.go:89] found id: ""
	I1209 05:20:27.144335 1340508 logs.go:282] 1 containers: [ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a]
	I1209 05:20:27.144403 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:20:27.148055 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:20:27.148137 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:20:27.173248 1340508 cri.go:89] found id: ""
	I1209 05:20:27.173273 1340508 logs.go:282] 0 containers: []
	W1209 05:20:27.173282 1340508 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:20:27.173295 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:20:27.173360 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:20:27.203221 1340508 cri.go:89] found id: "f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da"
	I1209 05:20:27.203245 1340508 cri.go:89] found id: ""
	I1209 05:20:27.203254 1340508 logs.go:282] 1 containers: [f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da]
	I1209 05:20:27.203315 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:20:27.207094 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:20:27.207188 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:20:27.232140 1340508 cri.go:89] found id: ""
	I1209 05:20:27.232166 1340508 logs.go:282] 0 containers: []
	W1209 05:20:27.232175 1340508 logs.go:284] No container was found matching "kindnet"
	I1209 05:20:27.232182 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:20:27.232242 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:20:27.261485 1340508 cri.go:89] found id: ""
	I1209 05:20:27.261508 1340508 logs.go:282] 0 containers: []
	W1209 05:20:27.261516 1340508 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:20:27.261531 1340508 logs.go:123] Gathering logs for kubelet ...
	I1209 05:20:27.261542 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:20:27.319734 1340508 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:20:27.319770 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:20:27.382248 1340508 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:20:27.382269 1340508 logs.go:123] Gathering logs for kube-apiserver [7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547] ...
	I1209 05:20:27.382284 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547"
	I1209 05:20:27.426013 1340508 logs.go:123] Gathering logs for kube-scheduler [ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a] ...
	I1209 05:20:27.426046 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a"
	I1209 05:20:27.458621 1340508 logs.go:123] Gathering logs for containerd ...
	I1209 05:20:27.458651 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:20:27.488930 1340508 logs.go:123] Gathering logs for container status ...
	I1209 05:20:27.488966 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:20:27.521489 1340508 logs.go:123] Gathering logs for dmesg ...
	I1209 05:20:27.521518 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:20:27.538084 1340508 logs.go:123] Gathering logs for etcd [40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b] ...
	I1209 05:20:27.538113 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b"
	I1209 05:20:27.573308 1340508 logs.go:123] Gathering logs for kube-controller-manager [f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da] ...
	I1209 05:20:27.573342 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da"
	I1209 05:20:30.103467 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:20:30.114563 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:20:30.114641 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:20:30.143932 1340508 cri.go:89] found id: "7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547"
	I1209 05:20:30.143952 1340508 cri.go:89] found id: ""
	I1209 05:20:30.143961 1340508 logs.go:282] 1 containers: [7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547]
	I1209 05:20:30.144062 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:20:30.147949 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:20:30.148055 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:20:30.173632 1340508 cri.go:89] found id: "40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b"
	I1209 05:20:30.173654 1340508 cri.go:89] found id: ""
	I1209 05:20:30.173663 1340508 logs.go:282] 1 containers: [40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b]
	I1209 05:20:30.173740 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:20:30.177789 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:20:30.177866 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:20:30.203728 1340508 cri.go:89] found id: ""
	I1209 05:20:30.203753 1340508 logs.go:282] 0 containers: []
	W1209 05:20:30.203762 1340508 logs.go:284] No container was found matching "coredns"
	I1209 05:20:30.203768 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:20:30.203830 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:20:30.230312 1340508 cri.go:89] found id: "ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a"
	I1209 05:20:30.230337 1340508 cri.go:89] found id: ""
	I1209 05:20:30.230346 1340508 logs.go:282] 1 containers: [ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a]
	I1209 05:20:30.230404 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:20:30.234191 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:20:30.234268 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:20:30.259629 1340508 cri.go:89] found id: ""
	I1209 05:20:30.259654 1340508 logs.go:282] 0 containers: []
	W1209 05:20:30.259662 1340508 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:20:30.259669 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:20:30.259728 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:20:30.286211 1340508 cri.go:89] found id: "f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da"
	I1209 05:20:30.286234 1340508 cri.go:89] found id: ""
	I1209 05:20:30.286242 1340508 logs.go:282] 1 containers: [f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da]
	I1209 05:20:30.286300 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:20:30.290101 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:20:30.290231 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:20:30.314694 1340508 cri.go:89] found id: ""
	I1209 05:20:30.314720 1340508 logs.go:282] 0 containers: []
	W1209 05:20:30.314729 1340508 logs.go:284] No container was found matching "kindnet"
	I1209 05:20:30.314735 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:20:30.314825 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:20:30.338870 1340508 cri.go:89] found id: ""
	I1209 05:20:30.338895 1340508 logs.go:282] 0 containers: []
	W1209 05:20:30.338903 1340508 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:20:30.338918 1340508 logs.go:123] Gathering logs for kubelet ...
	I1209 05:20:30.338948 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:20:30.396090 1340508 logs.go:123] Gathering logs for dmesg ...
	I1209 05:20:30.396128 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:20:30.412226 1340508 logs.go:123] Gathering logs for kube-controller-manager [f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da] ...
	I1209 05:20:30.412254 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da"
	I1209 05:20:30.439685 1340508 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:20:30.439713 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:20:30.507621 1340508 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:20:30.507642 1340508 logs.go:123] Gathering logs for kube-apiserver [7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547] ...
	I1209 05:20:30.507656 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547"
	I1209 05:20:30.542162 1340508 logs.go:123] Gathering logs for etcd [40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b] ...
	I1209 05:20:30.542197 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b"
	I1209 05:20:30.573366 1340508 logs.go:123] Gathering logs for kube-scheduler [ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a] ...
	I1209 05:20:30.573398 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a"
	I1209 05:20:30.609453 1340508 logs.go:123] Gathering logs for containerd ...
	I1209 05:20:30.609484 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:20:30.637874 1340508 logs.go:123] Gathering logs for container status ...
	I1209 05:20:30.637948 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
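	Since an apiserver container is found on every pass while port 8443 stays refused, the natural next check is whether that container is actually running or crash-looping. A sketch, assuming standard crictl behavior (inspect emits JSON; the grep is only a quick filter):

	    ID=7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547
	    sudo crictl ps -a --name=kube-apiserver              # STATE column: Running vs Exited
	    sudo crictl inspect "$ID" | grep -E '"state"|exitCode|reason'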
	I1209 05:20:33.170965 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:20:33.180917 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:20:33.181028 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:20:33.209882 1340508 cri.go:89] found id: "7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547"
	I1209 05:20:33.209951 1340508 cri.go:89] found id: ""
	I1209 05:20:33.209966 1340508 logs.go:282] 1 containers: [7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547]
	I1209 05:20:33.210030 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:20:33.213723 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:20:33.213811 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:20:33.238476 1340508 cri.go:89] found id: "40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b"
	I1209 05:20:33.238498 1340508 cri.go:89] found id: ""
	I1209 05:20:33.238506 1340508 logs.go:282] 1 containers: [40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b]
	I1209 05:20:33.238589 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:20:33.242346 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:20:33.242419 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:20:33.266133 1340508 cri.go:89] found id: ""
	I1209 05:20:33.266159 1340508 logs.go:282] 0 containers: []
	W1209 05:20:33.266168 1340508 logs.go:284] No container was found matching "coredns"
	I1209 05:20:33.266174 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:20:33.266234 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:20:33.291595 1340508 cri.go:89] found id: "ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a"
	I1209 05:20:33.291618 1340508 cri.go:89] found id: ""
	I1209 05:20:33.291626 1340508 logs.go:282] 1 containers: [ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a]
	I1209 05:20:33.291681 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:20:33.295284 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:20:33.295357 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:20:33.319266 1340508 cri.go:89] found id: ""
	I1209 05:20:33.319292 1340508 logs.go:282] 0 containers: []
	W1209 05:20:33.319300 1340508 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:20:33.319309 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:20:33.319375 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:20:33.343498 1340508 cri.go:89] found id: "f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da"
	I1209 05:20:33.343520 1340508 cri.go:89] found id: ""
	I1209 05:20:33.343529 1340508 logs.go:282] 1 containers: [f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da]
	I1209 05:20:33.343582 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:20:33.347058 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:20:33.347127 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:20:33.374748 1340508 cri.go:89] found id: ""
	I1209 05:20:33.374774 1340508 logs.go:282] 0 containers: []
	W1209 05:20:33.374784 1340508 logs.go:284] No container was found matching "kindnet"
	I1209 05:20:33.374790 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:20:33.374854 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:20:33.398604 1340508 cri.go:89] found id: ""
	I1209 05:20:33.398630 1340508 logs.go:282] 0 containers: []
	W1209 05:20:33.398639 1340508 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:20:33.398652 1340508 logs.go:123] Gathering logs for dmesg ...
	I1209 05:20:33.398666 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:20:33.414767 1340508 logs.go:123] Gathering logs for kube-apiserver [7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547] ...
	I1209 05:20:33.414792 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547"
	I1209 05:20:33.464975 1340508 logs.go:123] Gathering logs for etcd [40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b] ...
	I1209 05:20:33.465053 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b"
	I1209 05:20:33.495908 1340508 logs.go:123] Gathering logs for kube-controller-manager [f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da] ...
	I1209 05:20:33.495938 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da"
	I1209 05:20:33.524162 1340508 logs.go:123] Gathering logs for container status ...
	I1209 05:20:33.524231 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:20:33.553420 1340508 logs.go:123] Gathering logs for kubelet ...
	I1209 05:20:33.553446 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:20:33.613075 1340508 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:20:33.613109 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:20:33.680411 1340508 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:20:33.680481 1340508 logs.go:123] Gathering logs for kube-scheduler [ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a] ...
	I1209 05:20:33.680522 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a"
	I1209 05:20:33.715172 1340508 logs.go:123] Gathering logs for containerd ...
	I1209 05:20:33.715244 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:20:36.247281 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:20:36.257480 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:20:36.257553 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:20:36.288827 1340508 cri.go:89] found id: "7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547"
	I1209 05:20:36.288846 1340508 cri.go:89] found id: ""
	I1209 05:20:36.288855 1340508 logs.go:282] 1 containers: [7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547]
	I1209 05:20:36.288910 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:20:36.292536 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:20:36.292621 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:20:36.317423 1340508 cri.go:89] found id: "40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b"
	I1209 05:20:36.317444 1340508 cri.go:89] found id: ""
	I1209 05:20:36.317452 1340508 logs.go:282] 1 containers: [40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b]
	I1209 05:20:36.317508 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:20:36.321167 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:20:36.321238 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:20:36.345421 1340508 cri.go:89] found id: ""
	I1209 05:20:36.345447 1340508 logs.go:282] 0 containers: []
	W1209 05:20:36.345455 1340508 logs.go:284] No container was found matching "coredns"
	I1209 05:20:36.345462 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:20:36.345519 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:20:36.369170 1340508 cri.go:89] found id: "ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a"
	I1209 05:20:36.369193 1340508 cri.go:89] found id: ""
	I1209 05:20:36.369201 1340508 logs.go:282] 1 containers: [ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a]
	I1209 05:20:36.369258 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:20:36.373594 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:20:36.373670 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:20:36.396951 1340508 cri.go:89] found id: ""
	I1209 05:20:36.396977 1340508 logs.go:282] 0 containers: []
	W1209 05:20:36.396988 1340508 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:20:36.396994 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:20:36.397055 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:20:36.425623 1340508 cri.go:89] found id: "f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da"
	I1209 05:20:36.425655 1340508 cri.go:89] found id: ""
	I1209 05:20:36.425664 1340508 logs.go:282] 1 containers: [f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da]
	I1209 05:20:36.425733 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:20:36.429379 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:20:36.429460 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:20:36.451755 1340508 cri.go:89] found id: ""
	I1209 05:20:36.451831 1340508 logs.go:282] 0 containers: []
	W1209 05:20:36.451855 1340508 logs.go:284] No container was found matching "kindnet"
	I1209 05:20:36.451873 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:20:36.451965 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:20:36.476468 1340508 cri.go:89] found id: ""
	I1209 05:20:36.476494 1340508 logs.go:282] 0 containers: []
	W1209 05:20:36.476503 1340508 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:20:36.476517 1340508 logs.go:123] Gathering logs for kubelet ...
	I1209 05:20:36.476528 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:20:36.534437 1340508 logs.go:123] Gathering logs for dmesg ...
	I1209 05:20:36.534470 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:20:36.550398 1340508 logs.go:123] Gathering logs for etcd [40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b] ...
	I1209 05:20:36.550425 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b"
	I1209 05:20:36.584212 1340508 logs.go:123] Gathering logs for container status ...
	I1209 05:20:36.584244 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:20:36.616447 1340508 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:20:36.616476 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:20:36.677428 1340508 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:20:36.677447 1340508 logs.go:123] Gathering logs for kube-apiserver [7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547] ...
	I1209 05:20:36.677460 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547"
	I1209 05:20:36.717469 1340508 logs.go:123] Gathering logs for kube-scheduler [ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a] ...
	I1209 05:20:36.717544 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a"
	I1209 05:20:36.756601 1340508 logs.go:123] Gathering logs for kube-controller-manager [f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da] ...
	I1209 05:20:36.756632 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da"
	I1209 05:20:36.786536 1340508 logs.go:123] Gathering logs for containerd ...
	I1209 05:20:36.786568 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
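Note the pattern across cycles: kube-apiserver, etcd, kube-scheduler, and kube-controller-manager containers are all found, yet `kubectl describe nodes` keeps failing with "connection refused" on localhost:8443, so a container exists but nothing is accepting connections on the secure port (typically the process inside is not listening yet or is crash-looping). Below is a minimal sketch of a probe for that symptom, assuming it runs on the node itself; the host and port come from the log, and /healthz is the apiserver's standard health endpoint.

```go
// Probe the apiserver secure port seen failing throughout this log:
// first a plain TCP dial, then (if that succeeds) an HTTPS /healthz GET.
package main

import (
	"crypto/tls"
	"fmt"
	"net"
	"net/http"
	"time"
)

func main() {
	addr := "localhost:8443" // from the repeated kubectl errors above

	conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
	if err != nil {
		// Matches the "connection refused" in the log: no listener yet.
		fmt.Println("dial failed:", err)
		return
	}
	conn.Close()

	client := &http.Client{
		Timeout: 2 * time.Second,
		// The apiserver serves a self-signed cert; skip verification for
		// this diagnostic probe only.
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://" + addr + "/healthz")
	if err != nil {
		fmt.Println("healthz probe failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("healthz status:", resp.Status)
}
```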
	I1209 05:20:39.315373 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:20:39.324820 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:20:39.324897 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:20:39.348974 1340508 cri.go:89] found id: "7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547"
	I1209 05:20:39.348996 1340508 cri.go:89] found id: ""
	I1209 05:20:39.349004 1340508 logs.go:282] 1 containers: [7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547]
	I1209 05:20:39.349059 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:20:39.352699 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:20:39.352770 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:20:39.377963 1340508 cri.go:89] found id: "40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b"
	I1209 05:20:39.377985 1340508 cri.go:89] found id: ""
	I1209 05:20:39.377994 1340508 logs.go:282] 1 containers: [40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b]
	I1209 05:20:39.378057 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:20:39.381722 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:20:39.381803 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:20:39.411226 1340508 cri.go:89] found id: ""
	I1209 05:20:39.411251 1340508 logs.go:282] 0 containers: []
	W1209 05:20:39.411260 1340508 logs.go:284] No container was found matching "coredns"
	I1209 05:20:39.411265 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:20:39.411325 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:20:39.435378 1340508 cri.go:89] found id: "ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a"
	I1209 05:20:39.435397 1340508 cri.go:89] found id: ""
	I1209 05:20:39.435405 1340508 logs.go:282] 1 containers: [ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a]
	I1209 05:20:39.435459 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:20:39.438880 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:20:39.438944 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:20:39.462345 1340508 cri.go:89] found id: ""
	I1209 05:20:39.462370 1340508 logs.go:282] 0 containers: []
	W1209 05:20:39.462381 1340508 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:20:39.462387 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:20:39.462447 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:20:39.487412 1340508 cri.go:89] found id: "f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da"
	I1209 05:20:39.487434 1340508 cri.go:89] found id: ""
	I1209 05:20:39.487442 1340508 logs.go:282] 1 containers: [f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da]
	I1209 05:20:39.487521 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:20:39.491004 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:20:39.491095 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:20:39.518252 1340508 cri.go:89] found id: ""
	I1209 05:20:39.518276 1340508 logs.go:282] 0 containers: []
	W1209 05:20:39.518284 1340508 logs.go:284] No container was found matching "kindnet"
	I1209 05:20:39.518290 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:20:39.518354 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:20:39.547107 1340508 cri.go:89] found id: ""
	I1209 05:20:39.547132 1340508 logs.go:282] 0 containers: []
	W1209 05:20:39.547141 1340508 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:20:39.547154 1340508 logs.go:123] Gathering logs for container status ...
	I1209 05:20:39.547182 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:20:39.582591 1340508 logs.go:123] Gathering logs for kubelet ...
	I1209 05:20:39.582618 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:20:39.643306 1340508 logs.go:123] Gathering logs for kube-apiserver [7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547] ...
	I1209 05:20:39.643341 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547"
	I1209 05:20:39.693799 1340508 logs.go:123] Gathering logs for kube-controller-manager [f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da] ...
	I1209 05:20:39.693869 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da"
	I1209 05:20:39.731402 1340508 logs.go:123] Gathering logs for dmesg ...
	I1209 05:20:39.731473 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:20:39.748434 1340508 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:20:39.748505 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:20:39.816255 1340508 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:20:39.816319 1340508 logs.go:123] Gathering logs for etcd [40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b] ...
	I1209 05:20:39.816347 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b"
	I1209 05:20:39.847784 1340508 logs.go:123] Gathering logs for kube-scheduler [ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a] ...
	I1209 05:20:39.847815 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a"
	I1209 05:20:39.879625 1340508 logs.go:123] Gathering logs for containerd ...
	I1209 05:20:39.879658 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:20:42.410891 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:20:42.421778 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:20:42.421845 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:20:42.448727 1340508 cri.go:89] found id: "7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547"
	I1209 05:20:42.448750 1340508 cri.go:89] found id: ""
	I1209 05:20:42.448763 1340508 logs.go:282] 1 containers: [7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547]
	I1209 05:20:42.448823 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:20:42.452726 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:20:42.452797 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:20:42.479363 1340508 cri.go:89] found id: "40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b"
	I1209 05:20:42.479387 1340508 cri.go:89] found id: ""
	I1209 05:20:42.479396 1340508 logs.go:282] 1 containers: [40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b]
	I1209 05:20:42.479455 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:20:42.483136 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:20:42.483212 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:20:42.515279 1340508 cri.go:89] found id: ""
	I1209 05:20:42.515312 1340508 logs.go:282] 0 containers: []
	W1209 05:20:42.515321 1340508 logs.go:284] No container was found matching "coredns"
	I1209 05:20:42.515327 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:20:42.515394 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:20:42.551792 1340508 cri.go:89] found id: "ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a"
	I1209 05:20:42.551862 1340508 cri.go:89] found id: ""
	I1209 05:20:42.551885 1340508 logs.go:282] 1 containers: [ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a]
	I1209 05:20:42.551975 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:20:42.557373 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:20:42.557444 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:20:42.589798 1340508 cri.go:89] found id: ""
	I1209 05:20:42.589825 1340508 logs.go:282] 0 containers: []
	W1209 05:20:42.589835 1340508 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:20:42.589841 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:20:42.589904 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:20:42.629135 1340508 cri.go:89] found id: "f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da"
	I1209 05:20:42.629157 1340508 cri.go:89] found id: ""
	I1209 05:20:42.629166 1340508 logs.go:282] 1 containers: [f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da]
	I1209 05:20:42.629224 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:20:42.634868 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:20:42.634943 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:20:42.667308 1340508 cri.go:89] found id: ""
	I1209 05:20:42.667334 1340508 logs.go:282] 0 containers: []
	W1209 05:20:42.667342 1340508 logs.go:284] No container was found matching "kindnet"
	I1209 05:20:42.667348 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:20:42.667405 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:20:42.710576 1340508 cri.go:89] found id: ""
	I1209 05:20:42.710597 1340508 logs.go:282] 0 containers: []
	W1209 05:20:42.710605 1340508 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:20:42.710617 1340508 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:20:42.710629 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:20:42.834329 1340508 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:20:42.834401 1340508 logs.go:123] Gathering logs for kube-apiserver [7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547] ...
	I1209 05:20:42.834427 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547"
	I1209 05:20:42.874384 1340508 logs.go:123] Gathering logs for etcd [40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b] ...
	I1209 05:20:42.874458 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b"
	I1209 05:20:42.920922 1340508 logs.go:123] Gathering logs for kube-scheduler [ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a] ...
	I1209 05:20:42.920992 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a"
	I1209 05:20:42.963702 1340508 logs.go:123] Gathering logs for kube-controller-manager [f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da] ...
	I1209 05:20:42.963772 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da"
	I1209 05:20:43.002595 1340508 logs.go:123] Gathering logs for containerd ...
	I1209 05:20:43.002712 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:20:43.034229 1340508 logs.go:123] Gathering logs for container status ...
	I1209 05:20:43.034269 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:20:43.095238 1340508 logs.go:123] Gathering logs for kubelet ...
	I1209 05:20:43.095266 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:20:43.162377 1340508 logs.go:123] Gathering logs for dmesg ...
	I1209 05:20:43.162413 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:20:45.678868 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:20:45.690474 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:20:45.690546 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:20:45.717873 1340508 cri.go:89] found id: "7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547"
	I1209 05:20:45.717893 1340508 cri.go:89] found id: ""
	I1209 05:20:45.717902 1340508 logs.go:282] 1 containers: [7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547]
	I1209 05:20:45.717959 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:20:45.722127 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:20:45.722200 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:20:45.748694 1340508 cri.go:89] found id: "40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b"
	I1209 05:20:45.748713 1340508 cri.go:89] found id: ""
	I1209 05:20:45.748727 1340508 logs.go:282] 1 containers: [40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b]
	I1209 05:20:45.748782 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:20:45.753383 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:20:45.753451 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:20:45.777750 1340508 cri.go:89] found id: ""
	I1209 05:20:45.777773 1340508 logs.go:282] 0 containers: []
	W1209 05:20:45.777782 1340508 logs.go:284] No container was found matching "coredns"
	I1209 05:20:45.777788 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:20:45.777846 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:20:45.803339 1340508 cri.go:89] found id: "ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a"
	I1209 05:20:45.803358 1340508 cri.go:89] found id: ""
	I1209 05:20:45.803366 1340508 logs.go:282] 1 containers: [ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a]
	I1209 05:20:45.803426 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:20:45.807643 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:20:45.807772 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:20:45.832822 1340508 cri.go:89] found id: ""
	I1209 05:20:45.832901 1340508 logs.go:282] 0 containers: []
	W1209 05:20:45.832923 1340508 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:20:45.832938 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:20:45.833010 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:20:45.859250 1340508 cri.go:89] found id: "f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da"
	I1209 05:20:45.859273 1340508 cri.go:89] found id: ""
	I1209 05:20:45.859283 1340508 logs.go:282] 1 containers: [f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da]
	I1209 05:20:45.859345 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:20:45.863124 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:20:45.863195 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:20:45.888218 1340508 cri.go:89] found id: ""
	I1209 05:20:45.888282 1340508 logs.go:282] 0 containers: []
	W1209 05:20:45.888307 1340508 logs.go:284] No container was found matching "kindnet"
	I1209 05:20:45.888325 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:20:45.888627 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:20:45.913658 1340508 cri.go:89] found id: ""
	I1209 05:20:45.913681 1340508 logs.go:282] 0 containers: []
	W1209 05:20:45.913690 1340508 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:20:45.913703 1340508 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:20:45.913714 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:20:45.977222 1340508 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:20:45.977239 1340508 logs.go:123] Gathering logs for kube-apiserver [7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547] ...
	I1209 05:20:45.977252 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547"
	I1209 05:20:46.012067 1340508 logs.go:123] Gathering logs for kube-controller-manager [f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da] ...
	I1209 05:20:46.012106 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da"
	I1209 05:20:46.041332 1340508 logs.go:123] Gathering logs for containerd ...
	I1209 05:20:46.041363 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:20:46.071294 1340508 logs.go:123] Gathering logs for container status ...
	I1209 05:20:46.071334 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:20:46.101687 1340508 logs.go:123] Gathering logs for kubelet ...
	I1209 05:20:46.101714 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:20:46.159765 1340508 logs.go:123] Gathering logs for dmesg ...
	I1209 05:20:46.159803 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:20:46.179635 1340508 logs.go:123] Gathering logs for etcd [40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b] ...
	I1209 05:20:46.179673 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b"
	I1209 05:20:46.216534 1340508 logs.go:123] Gathering logs for kube-scheduler [ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a] ...
	I1209 05:20:46.216565 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a"
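The "describe nodes" gatherer that fails in every cycle runs the version-matched kubectl shipped on the node (/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl) against the node-local kubeconfig, which pins the failure on the apiserver itself rather than on the host's kubeconfig. A hypothetical reconstruction of that gatherer is below, reporting stdout and stderr separately the way logs.go formats its failure message; the binary path and flags are copied from the log.

```go
// Run the node-local kubectl against the node-local kubeconfig, exactly
// as the failing "Gathering logs for describe nodes" step does above.
package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

func main() {
	kubectl := "/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl"
	cmd := exec.Command("sudo", kubectl, "describe", "nodes",
		"--kubeconfig=/var/lib/minikube/kubeconfig")

	var stdout, stderr bytes.Buffer
	cmd.Stdout, cmd.Stderr = &stdout, &stderr
	err := cmd.Run()

	// logs.go prints both streams on failure; stdout is empty here and
	// stderr carries the "connection refused" message.
	fmt.Printf("stdout:\n%s\nstderr:\n%s\n", stdout.String(), stderr.String())
	if err != nil {
		// With the apiserver refusing connections this exits with
		// status 1, as in the repeated warnings above.
		fmt.Println("describe nodes failed:", err)
	}
}
```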
	I1209 05:20:48.748156 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:20:48.757817 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:20:48.757887 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:20:48.785979 1340508 cri.go:89] found id: "7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547"
	I1209 05:20:48.786001 1340508 cri.go:89] found id: ""
	I1209 05:20:48.786009 1340508 logs.go:282] 1 containers: [7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547]
	I1209 05:20:48.786065 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:20:48.789710 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:20:48.789781 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:20:48.815475 1340508 cri.go:89] found id: "40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b"
	I1209 05:20:48.815498 1340508 cri.go:89] found id: ""
	I1209 05:20:48.815506 1340508 logs.go:282] 1 containers: [40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b]
	I1209 05:20:48.815564 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:20:48.819216 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:20:48.819292 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:20:48.843669 1340508 cri.go:89] found id: ""
	I1209 05:20:48.843692 1340508 logs.go:282] 0 containers: []
	W1209 05:20:48.843700 1340508 logs.go:284] No container was found matching "coredns"
	I1209 05:20:48.843706 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:20:48.843764 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:20:48.874623 1340508 cri.go:89] found id: "ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a"
	I1209 05:20:48.874646 1340508 cri.go:89] found id: ""
	I1209 05:20:48.874654 1340508 logs.go:282] 1 containers: [ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a]
	I1209 05:20:48.874709 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:20:48.878297 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:20:48.878369 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:20:48.905050 1340508 cri.go:89] found id: ""
	I1209 05:20:48.905076 1340508 logs.go:282] 0 containers: []
	W1209 05:20:48.905084 1340508 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:20:48.905090 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:20:48.905150 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:20:48.930001 1340508 cri.go:89] found id: "f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da"
	I1209 05:20:48.930024 1340508 cri.go:89] found id: ""
	I1209 05:20:48.930032 1340508 logs.go:282] 1 containers: [f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da]
	I1209 05:20:48.930088 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:20:48.933825 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:20:48.933929 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:20:48.961883 1340508 cri.go:89] found id: ""
	I1209 05:20:48.961911 1340508 logs.go:282] 0 containers: []
	W1209 05:20:48.961920 1340508 logs.go:284] No container was found matching "kindnet"
	I1209 05:20:48.961926 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:20:48.961985 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:20:48.985295 1340508 cri.go:89] found id: ""
	I1209 05:20:48.985346 1340508 logs.go:282] 0 containers: []
	W1209 05:20:48.985371 1340508 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:20:48.985396 1340508 logs.go:123] Gathering logs for kubelet ...
	I1209 05:20:48.985418 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:20:49.042817 1340508 logs.go:123] Gathering logs for kube-controller-manager [f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da] ...
	I1209 05:20:49.042852 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da"
	I1209 05:20:49.072423 1340508 logs.go:123] Gathering logs for containerd ...
	I1209 05:20:49.072451 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:20:49.102050 1340508 logs.go:123] Gathering logs for dmesg ...
	I1209 05:20:49.102084 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:20:49.118841 1340508 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:20:49.118868 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:20:49.182391 1340508 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:20:49.182409 1340508 logs.go:123] Gathering logs for kube-apiserver [7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547] ...
	I1209 05:20:49.182422 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547"
	I1209 05:20:49.215739 1340508 logs.go:123] Gathering logs for etcd [40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b] ...
	I1209 05:20:49.215773 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b"
	I1209 05:20:49.247767 1340508 logs.go:123] Gathering logs for kube-scheduler [ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a] ...
	I1209 05:20:49.247797 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a"
	I1209 05:20:49.284990 1340508 logs.go:123] Gathering logs for container status ...
	I1209 05:20:49.285018 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:20:51.813274 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:20:51.823422 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:20:51.823503 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:20:51.848440 1340508 cri.go:89] found id: "7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547"
	I1209 05:20:51.848463 1340508 cri.go:89] found id: ""
	I1209 05:20:51.848471 1340508 logs.go:282] 1 containers: [7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547]
	I1209 05:20:51.848548 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:20:51.852350 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:20:51.852428 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:20:51.877097 1340508 cri.go:89] found id: "40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b"
	I1209 05:20:51.877120 1340508 cri.go:89] found id: ""
	I1209 05:20:51.877129 1340508 logs.go:282] 1 containers: [40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b]
	I1209 05:20:51.877203 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:20:51.880995 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:20:51.881066 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:20:51.905136 1340508 cri.go:89] found id: ""
	I1209 05:20:51.905164 1340508 logs.go:282] 0 containers: []
	W1209 05:20:51.905183 1340508 logs.go:284] No container was found matching "coredns"
	I1209 05:20:51.905196 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:20:51.905259 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:20:51.929315 1340508 cri.go:89] found id: "ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a"
	I1209 05:20:51.929336 1340508 cri.go:89] found id: ""
	I1209 05:20:51.929345 1340508 logs.go:282] 1 containers: [ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a]
	I1209 05:20:51.929474 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:20:51.933140 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:20:51.933214 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:20:51.960967 1340508 cri.go:89] found id: ""
	I1209 05:20:51.960990 1340508 logs.go:282] 0 containers: []
	W1209 05:20:51.961004 1340508 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:20:51.961010 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:20:51.961068 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:20:51.987076 1340508 cri.go:89] found id: "f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da"
	I1209 05:20:51.987098 1340508 cri.go:89] found id: ""
	I1209 05:20:51.987106 1340508 logs.go:282] 1 containers: [f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da]
	I1209 05:20:51.987160 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:20:51.990681 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:20:51.990748 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:20:52.017949 1340508 cri.go:89] found id: ""
	I1209 05:20:52.017977 1340508 logs.go:282] 0 containers: []
	W1209 05:20:52.017987 1340508 logs.go:284] No container was found matching "kindnet"
	I1209 05:20:52.017995 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:20:52.018081 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:20:52.043978 1340508 cri.go:89] found id: ""
	I1209 05:20:52.044003 1340508 logs.go:282] 0 containers: []
	W1209 05:20:52.044041 1340508 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:20:52.044061 1340508 logs.go:123] Gathering logs for kube-scheduler [ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a] ...
	I1209 05:20:52.044081 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a"
	I1209 05:20:52.080636 1340508 logs.go:123] Gathering logs for containerd ...
	I1209 05:20:52.080682 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:20:52.111663 1340508 logs.go:123] Gathering logs for kubelet ...
	I1209 05:20:52.111718 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:20:52.172995 1340508 logs.go:123] Gathering logs for dmesg ...
	I1209 05:20:52.173031 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:20:52.189121 1340508 logs.go:123] Gathering logs for etcd [40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b] ...
	I1209 05:20:52.189149 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b"
	I1209 05:20:52.224633 1340508 logs.go:123] Gathering logs for kube-controller-manager [f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da] ...
	I1209 05:20:52.224669 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da"
	I1209 05:20:52.256384 1340508 logs.go:123] Gathering logs for container status ...
	I1209 05:20:52.256413 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:20:52.285977 1340508 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:20:52.286004 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:20:52.355200 1340508 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:20:52.355223 1340508 logs.go:123] Gathering logs for kube-apiserver [7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547] ...
	I1209 05:20:52.355236 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547"
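The "container status" gatherer repeated in each cycle uses a shell fallback, sudo `which crictl || echo crictl` ps -a || sudo docker ps -a: prefer crictl if it resolves on PATH, and fall back to docker if crictl is missing or its listing fails. A loose Go translation of that fallback, as an illustrative sketch only (the function name is hypothetical):

```go
// containerStatus prefers crictl and falls back to docker, mirroring the
// shell one-liner used for "Gathering logs for container status" above.
package main

import (
	"fmt"
	"os/exec"
)

func containerStatus() (string, error) {
	if _, err := exec.LookPath("crictl"); err == nil {
		if out, err := exec.Command("sudo", "crictl", "ps", "-a").Output(); err == nil {
			return string(out), nil
		}
	}
	// crictl missing or failed: try docker as the last resort.
	out, err := exec.Command("sudo", "docker", "ps", "-a").Output()
	return string(out), err
}

func main() {
	out, err := containerStatus()
	if err != nil {
		fmt.Println("no container runtime CLI responded:", err)
		return
	}
	fmt.Print(out)
}
```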
	I1209 05:20:54.887384 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:20:54.897392 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:20:54.897465 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:20:54.926681 1340508 cri.go:89] found id: "7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547"
	I1209 05:20:54.926700 1340508 cri.go:89] found id: ""
	I1209 05:20:54.926708 1340508 logs.go:282] 1 containers: [7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547]
	I1209 05:20:54.926769 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:20:54.930356 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:20:54.930430 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:20:54.953742 1340508 cri.go:89] found id: "40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b"
	I1209 05:20:54.953804 1340508 cri.go:89] found id: ""
	I1209 05:20:54.953827 1340508 logs.go:282] 1 containers: [40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b]
	I1209 05:20:54.953891 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:20:54.957582 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:20:54.957652 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:20:54.984134 1340508 cri.go:89] found id: ""
	I1209 05:20:54.984208 1340508 logs.go:282] 0 containers: []
	W1209 05:20:54.984224 1340508 logs.go:284] No container was found matching "coredns"
	I1209 05:20:54.984232 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:20:54.984292 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:20:55.021099 1340508 cri.go:89] found id: "ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a"
	I1209 05:20:55.021184 1340508 cri.go:89] found id: ""
	I1209 05:20:55.021206 1340508 logs.go:282] 1 containers: [ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a]
	I1209 05:20:55.021294 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:20:55.026424 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:20:55.026562 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:20:55.071711 1340508 cri.go:89] found id: ""
	I1209 05:20:55.071810 1340508 logs.go:282] 0 containers: []
	W1209 05:20:55.071835 1340508 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:20:55.071856 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:20:55.071985 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:20:55.176587 1340508 cri.go:89] found id: "f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da"
	I1209 05:20:55.176660 1340508 cri.go:89] found id: ""
	I1209 05:20:55.176683 1340508 logs.go:282] 1 containers: [f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da]
	I1209 05:20:55.176803 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:20:55.181459 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:20:55.181542 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:20:55.210810 1340508 cri.go:89] found id: ""
	I1209 05:20:55.210835 1340508 logs.go:282] 0 containers: []
	W1209 05:20:55.210851 1340508 logs.go:284] No container was found matching "kindnet"
	I1209 05:20:55.210857 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:20:55.210939 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:20:55.238554 1340508 cri.go:89] found id: ""
	I1209 05:20:55.238586 1340508 logs.go:282] 0 containers: []
	W1209 05:20:55.238595 1340508 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:20:55.238608 1340508 logs.go:123] Gathering logs for etcd [40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b] ...
	I1209 05:20:55.238626 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b"
	I1209 05:20:55.274592 1340508 logs.go:123] Gathering logs for kube-controller-manager [f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da] ...
	I1209 05:20:55.274667 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da"
	I1209 05:20:55.314855 1340508 logs.go:123] Gathering logs for containerd ...
	I1209 05:20:55.314929 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:20:55.344001 1340508 logs.go:123] Gathering logs for container status ...
	I1209 05:20:55.344048 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:20:55.376576 1340508 logs.go:123] Gathering logs for kubelet ...
	I1209 05:20:55.376605 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:20:55.436507 1340508 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:20:55.436542 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:20:55.512365 1340508 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:20:55.512383 1340508 logs.go:123] Gathering logs for kube-scheduler [ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a] ...
	I1209 05:20:55.512396 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a"
	I1209 05:20:55.544496 1340508 logs.go:123] Gathering logs for dmesg ...
	I1209 05:20:55.544527 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:20:55.560995 1340508 logs.go:123] Gathering logs for kube-apiserver [7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547] ...
	I1209 05:20:55.561027 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547"
	I1209 05:20:58.104431 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:20:58.114276 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:20:58.114348 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:20:58.138668 1340508 cri.go:89] found id: "7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547"
	I1209 05:20:58.138687 1340508 cri.go:89] found id: ""
	I1209 05:20:58.138695 1340508 logs.go:282] 1 containers: [7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547]
	I1209 05:20:58.138749 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:20:58.142545 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:20:58.142624 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:20:58.167721 1340508 cri.go:89] found id: "40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b"
	I1209 05:20:58.167742 1340508 cri.go:89] found id: ""
	I1209 05:20:58.167751 1340508 logs.go:282] 1 containers: [40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b]
	I1209 05:20:58.167809 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:20:58.171423 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:20:58.171514 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:20:58.195495 1340508 cri.go:89] found id: ""
	I1209 05:20:58.195518 1340508 logs.go:282] 0 containers: []
	W1209 05:20:58.195527 1340508 logs.go:284] No container was found matching "coredns"
	I1209 05:20:58.195533 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:20:58.195590 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:20:58.219652 1340508 cri.go:89] found id: "ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a"
	I1209 05:20:58.219720 1340508 cri.go:89] found id: ""
	I1209 05:20:58.219755 1340508 logs.go:282] 1 containers: [ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a]
	I1209 05:20:58.219844 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:20:58.223454 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:20:58.223532 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:20:58.246563 1340508 cri.go:89] found id: ""
	I1209 05:20:58.246595 1340508 logs.go:282] 0 containers: []
	W1209 05:20:58.246603 1340508 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:20:58.246610 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:20:58.246674 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:20:58.270930 1340508 cri.go:89] found id: "f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da"
	I1209 05:20:58.270953 1340508 cri.go:89] found id: ""
	I1209 05:20:58.270961 1340508 logs.go:282] 1 containers: [f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da]
	I1209 05:20:58.271017 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:20:58.274976 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:20:58.275072 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:20:58.299353 1340508 cri.go:89] found id: ""
	I1209 05:20:58.299378 1340508 logs.go:282] 0 containers: []
	W1209 05:20:58.299386 1340508 logs.go:284] No container was found matching "kindnet"
	I1209 05:20:58.299393 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:20:58.299497 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:20:58.324624 1340508 cri.go:89] found id: ""
	I1209 05:20:58.324699 1340508 logs.go:282] 0 containers: []
	W1209 05:20:58.324715 1340508 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:20:58.324731 1340508 logs.go:123] Gathering logs for container status ...
	I1209 05:20:58.324744 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:20:58.352945 1340508 logs.go:123] Gathering logs for dmesg ...
	I1209 05:20:58.352977 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:20:58.368561 1340508 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:20:58.368591 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:20:58.436141 1340508 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:20:58.436177 1340508 logs.go:123] Gathering logs for kube-apiserver [7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547] ...
	I1209 05:20:58.436191 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547"
	I1209 05:20:58.477061 1340508 logs.go:123] Gathering logs for etcd [40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b] ...
	I1209 05:20:58.477133 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b"
	I1209 05:20:58.518493 1340508 logs.go:123] Gathering logs for kubelet ...
	I1209 05:20:58.518523 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:20:58.577192 1340508 logs.go:123] Gathering logs for kube-scheduler [ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a] ...
	I1209 05:20:58.577227 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a"
	I1209 05:20:58.609759 1340508 logs.go:123] Gathering logs for kube-controller-manager [f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da] ...
	I1209 05:20:58.609790 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da"
	I1209 05:20:58.640323 1340508 logs.go:123] Gathering logs for containerd ...
	I1209 05:20:58.640353 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:21:01.171152 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:21:01.184661 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:21:01.184743 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:21:01.215205 1340508 cri.go:89] found id: "7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547"
	I1209 05:21:01.215228 1340508 cri.go:89] found id: ""
	I1209 05:21:01.215236 1340508 logs.go:282] 1 containers: [7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547]
	I1209 05:21:01.215307 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:21:01.221079 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:21:01.221164 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:21:01.250733 1340508 cri.go:89] found id: "40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b"
	I1209 05:21:01.250754 1340508 cri.go:89] found id: ""
	I1209 05:21:01.250762 1340508 logs.go:282] 1 containers: [40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b]
	I1209 05:21:01.250829 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:21:01.257559 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:21:01.257658 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:21:01.293053 1340508 cri.go:89] found id: ""
	I1209 05:21:01.293088 1340508 logs.go:282] 0 containers: []
	W1209 05:21:01.293098 1340508 logs.go:284] No container was found matching "coredns"
	I1209 05:21:01.293104 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:21:01.293173 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:21:01.320810 1340508 cri.go:89] found id: "ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a"
	I1209 05:21:01.320881 1340508 cri.go:89] found id: ""
	I1209 05:21:01.320906 1340508 logs.go:282] 1 containers: [ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a]
	I1209 05:21:01.320991 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:21:01.326670 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:21:01.326780 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:21:01.373031 1340508 cri.go:89] found id: ""
	I1209 05:21:01.373105 1340508 logs.go:282] 0 containers: []
	W1209 05:21:01.373126 1340508 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:21:01.373144 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:21:01.373235 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:21:01.432665 1340508 cri.go:89] found id: "f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da"
	I1209 05:21:01.432738 1340508 cri.go:89] found id: ""
	I1209 05:21:01.432763 1340508 logs.go:282] 1 containers: [f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da]
	I1209 05:21:01.432860 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:21:01.445258 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:21:01.445376 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:21:01.525287 1340508 cri.go:89] found id: ""
	I1209 05:21:01.525351 1340508 logs.go:282] 0 containers: []
	W1209 05:21:01.525371 1340508 logs.go:284] No container was found matching "kindnet"
	I1209 05:21:01.525388 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:21:01.525486 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:21:01.557549 1340508 cri.go:89] found id: ""
	I1209 05:21:01.557572 1340508 logs.go:282] 0 containers: []
	W1209 05:21:01.557580 1340508 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:21:01.557593 1340508 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:21:01.557603 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:21:01.634737 1340508 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:21:01.634755 1340508 logs.go:123] Gathering logs for kube-apiserver [7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547] ...
	I1209 05:21:01.634768 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547"
	I1209 05:21:01.686019 1340508 logs.go:123] Gathering logs for etcd [40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b] ...
	I1209 05:21:01.686091 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b"
	I1209 05:21:01.727038 1340508 logs.go:123] Gathering logs for kube-scheduler [ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a] ...
	I1209 05:21:01.727124 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a"
	I1209 05:21:01.762374 1340508 logs.go:123] Gathering logs for container status ...
	I1209 05:21:01.762445 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:21:01.806210 1340508 logs.go:123] Gathering logs for dmesg ...
	I1209 05:21:01.806234 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:21:01.823422 1340508 logs.go:123] Gathering logs for kube-controller-manager [f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da] ...
	I1209 05:21:01.823492 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da"
	I1209 05:21:01.859173 1340508 logs.go:123] Gathering logs for containerd ...
	I1209 05:21:01.859414 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:21:01.892357 1340508 logs.go:123] Gathering logs for kubelet ...
	I1209 05:21:01.892442 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:21:04.467762 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:21:04.478022 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:21:04.478093 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:21:04.510966 1340508 cri.go:89] found id: "7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547"
	I1209 05:21:04.510985 1340508 cri.go:89] found id: ""
	I1209 05:21:04.510993 1340508 logs.go:282] 1 containers: [7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547]
	I1209 05:21:04.511048 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:21:04.514580 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:21:04.514647 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:21:04.539825 1340508 cri.go:89] found id: "40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b"
	I1209 05:21:04.539844 1340508 cri.go:89] found id: ""
	I1209 05:21:04.539852 1340508 logs.go:282] 1 containers: [40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b]
	I1209 05:21:04.539907 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:21:04.543471 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:21:04.543549 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:21:04.569629 1340508 cri.go:89] found id: ""
	I1209 05:21:04.569653 1340508 logs.go:282] 0 containers: []
	W1209 05:21:04.569661 1340508 logs.go:284] No container was found matching "coredns"
	I1209 05:21:04.569667 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:21:04.569730 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:21:04.599723 1340508 cri.go:89] found id: "ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a"
	I1209 05:21:04.599747 1340508 cri.go:89] found id: ""
	I1209 05:21:04.599755 1340508 logs.go:282] 1 containers: [ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a]
	I1209 05:21:04.599815 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:21:04.603466 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:21:04.603532 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:21:04.629526 1340508 cri.go:89] found id: ""
	I1209 05:21:04.629549 1340508 logs.go:282] 0 containers: []
	W1209 05:21:04.629558 1340508 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:21:04.629564 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:21:04.629627 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:21:04.654003 1340508 cri.go:89] found id: "f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da"
	I1209 05:21:04.654025 1340508 cri.go:89] found id: ""
	I1209 05:21:04.654034 1340508 logs.go:282] 1 containers: [f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da]
	I1209 05:21:04.654112 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:21:04.657712 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:21:04.657791 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:21:04.682232 1340508 cri.go:89] found id: ""
	I1209 05:21:04.682255 1340508 logs.go:282] 0 containers: []
	W1209 05:21:04.682263 1340508 logs.go:284] No container was found matching "kindnet"
	I1209 05:21:04.682269 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:21:04.682372 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:21:04.707906 1340508 cri.go:89] found id: ""
	I1209 05:21:04.707931 1340508 logs.go:282] 0 containers: []
	W1209 05:21:04.707939 1340508 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:21:04.707955 1340508 logs.go:123] Gathering logs for kube-apiserver [7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547] ...
	I1209 05:21:04.707969 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547"
	I1209 05:21:04.740731 1340508 logs.go:123] Gathering logs for etcd [40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b] ...
	I1209 05:21:04.740759 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b"
	I1209 05:21:04.775587 1340508 logs.go:123] Gathering logs for kubelet ...
	I1209 05:21:04.775624 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:21:04.840168 1340508 logs.go:123] Gathering logs for dmesg ...
	I1209 05:21:04.840262 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:21:04.861926 1340508 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:21:04.862005 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:21:04.944583 1340508 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:21:04.944647 1340508 logs.go:123] Gathering logs for kube-scheduler [ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a] ...
	I1209 05:21:04.944676 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a"
	I1209 05:21:04.992448 1340508 logs.go:123] Gathering logs for kube-controller-manager [f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da] ...
	I1209 05:21:04.992479 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da"
	I1209 05:21:05.039396 1340508 logs.go:123] Gathering logs for containerd ...
	I1209 05:21:05.039425 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:21:05.070711 1340508 logs.go:123] Gathering logs for container status ...
	I1209 05:21:05.070746 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:21:07.621781 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:21:07.632281 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:21:07.632354 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:21:07.657397 1340508 cri.go:89] found id: "7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547"
	I1209 05:21:07.657420 1340508 cri.go:89] found id: ""
	I1209 05:21:07.657428 1340508 logs.go:282] 1 containers: [7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547]
	I1209 05:21:07.657486 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:21:07.661291 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:21:07.661371 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:21:07.686811 1340508 cri.go:89] found id: "40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b"
	I1209 05:21:07.686833 1340508 cri.go:89] found id: ""
	I1209 05:21:07.686847 1340508 logs.go:282] 1 containers: [40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b]
	I1209 05:21:07.686904 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:21:07.690612 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:21:07.690690 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:21:07.716718 1340508 cri.go:89] found id: ""
	I1209 05:21:07.716744 1340508 logs.go:282] 0 containers: []
	W1209 05:21:07.716753 1340508 logs.go:284] No container was found matching "coredns"
	I1209 05:21:07.716760 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:21:07.716822 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:21:07.742427 1340508 cri.go:89] found id: "ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a"
	I1209 05:21:07.742452 1340508 cri.go:89] found id: ""
	I1209 05:21:07.742460 1340508 logs.go:282] 1 containers: [ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a]
	I1209 05:21:07.742516 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:21:07.746108 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:21:07.746182 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:21:07.777028 1340508 cri.go:89] found id: ""
	I1209 05:21:07.777052 1340508 logs.go:282] 0 containers: []
	W1209 05:21:07.777060 1340508 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:21:07.777067 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:21:07.777125 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:21:07.805680 1340508 cri.go:89] found id: "f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da"
	I1209 05:21:07.805702 1340508 cri.go:89] found id: ""
	I1209 05:21:07.805713 1340508 logs.go:282] 1 containers: [f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da]
	I1209 05:21:07.805800 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:21:07.809765 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:21:07.809833 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:21:07.833460 1340508 cri.go:89] found id: ""
	I1209 05:21:07.833483 1340508 logs.go:282] 0 containers: []
	W1209 05:21:07.833492 1340508 logs.go:284] No container was found matching "kindnet"
	I1209 05:21:07.833498 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:21:07.833557 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:21:07.862404 1340508 cri.go:89] found id: ""
	I1209 05:21:07.862427 1340508 logs.go:282] 0 containers: []
	W1209 05:21:07.862436 1340508 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:21:07.862450 1340508 logs.go:123] Gathering logs for kubelet ...
	I1209 05:21:07.862461 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:21:07.920156 1340508 logs.go:123] Gathering logs for dmesg ...
	I1209 05:21:07.920192 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:21:07.936721 1340508 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:21:07.936751 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:21:08.013181 1340508 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:21:08.013205 1340508 logs.go:123] Gathering logs for kube-scheduler [ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a] ...
	I1209 05:21:08.013221 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a"
	I1209 05:21:08.050313 1340508 logs.go:123] Gathering logs for kube-controller-manager [f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da] ...
	I1209 05:21:08.050344 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da"
	I1209 05:21:08.080487 1340508 logs.go:123] Gathering logs for containerd ...
	I1209 05:21:08.080517 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:21:08.111807 1340508 logs.go:123] Gathering logs for container status ...
	I1209 05:21:08.111841 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:21:08.140169 1340508 logs.go:123] Gathering logs for kube-apiserver [7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547] ...
	I1209 05:21:08.140198 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547"
	I1209 05:21:08.177025 1340508 logs.go:123] Gathering logs for etcd [40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b] ...
	I1209 05:21:08.177060 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b"
	I1209 05:21:10.710877 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:21:10.721782 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:21:10.721856 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:21:10.748436 1340508 cri.go:89] found id: "7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547"
	I1209 05:21:10.748455 1340508 cri.go:89] found id: ""
	I1209 05:21:10.748464 1340508 logs.go:282] 1 containers: [7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547]
	I1209 05:21:10.748523 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:21:10.752242 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:21:10.752310 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:21:10.776545 1340508 cri.go:89] found id: "40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b"
	I1209 05:21:10.776569 1340508 cri.go:89] found id: ""
	I1209 05:21:10.776577 1340508 logs.go:282] 1 containers: [40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b]
	I1209 05:21:10.776638 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:21:10.780117 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:21:10.780195 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:21:10.804356 1340508 cri.go:89] found id: ""
	I1209 05:21:10.804382 1340508 logs.go:282] 0 containers: []
	W1209 05:21:10.804391 1340508 logs.go:284] No container was found matching "coredns"
	I1209 05:21:10.804397 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:21:10.804541 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:21:10.828652 1340508 cri.go:89] found id: "ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a"
	I1209 05:21:10.828674 1340508 cri.go:89] found id: ""
	I1209 05:21:10.828682 1340508 logs.go:282] 1 containers: [ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a]
	I1209 05:21:10.828738 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:21:10.832398 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:21:10.832476 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:21:10.856679 1340508 cri.go:89] found id: ""
	I1209 05:21:10.856702 1340508 logs.go:282] 0 containers: []
	W1209 05:21:10.856710 1340508 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:21:10.856716 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:21:10.856775 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:21:10.881631 1340508 cri.go:89] found id: "f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da"
	I1209 05:21:10.881654 1340508 cri.go:89] found id: ""
	I1209 05:21:10.881663 1340508 logs.go:282] 1 containers: [f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da]
	I1209 05:21:10.881720 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:21:10.885451 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:21:10.885527 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:21:10.912852 1340508 cri.go:89] found id: ""
	I1209 05:21:10.912877 1340508 logs.go:282] 0 containers: []
	W1209 05:21:10.912886 1340508 logs.go:284] No container was found matching "kindnet"
	I1209 05:21:10.912893 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:21:10.912953 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:21:10.937076 1340508 cri.go:89] found id: ""
	I1209 05:21:10.937098 1340508 logs.go:282] 0 containers: []
	W1209 05:21:10.937106 1340508 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:21:10.937118 1340508 logs.go:123] Gathering logs for kube-scheduler [ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a] ...
	I1209 05:21:10.937135 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a"
	I1209 05:21:10.970623 1340508 logs.go:123] Gathering logs for kubelet ...
	I1209 05:21:10.970656 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:21:11.028226 1340508 logs.go:123] Gathering logs for dmesg ...
	I1209 05:21:11.028262 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:21:11.044502 1340508 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:21:11.044528 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:21:11.111528 1340508 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:21:11.111548 1340508 logs.go:123] Gathering logs for etcd [40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b] ...
	I1209 05:21:11.111561 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b"
	I1209 05:21:11.147146 1340508 logs.go:123] Gathering logs for kube-controller-manager [f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da] ...
	I1209 05:21:11.147179 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da"
	I1209 05:21:11.177353 1340508 logs.go:123] Gathering logs for containerd ...
	I1209 05:21:11.177384 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:21:11.212160 1340508 logs.go:123] Gathering logs for container status ...
	I1209 05:21:11.212190 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:21:11.244942 1340508 logs.go:123] Gathering logs for kube-apiserver [7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547] ...
	I1209 05:21:11.244967 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547"
	I1209 05:21:13.779006 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:21:13.789035 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:21:13.789104 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:21:13.813786 1340508 cri.go:89] found id: "7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547"
	I1209 05:21:13.813805 1340508 cri.go:89] found id: ""
	I1209 05:21:13.813813 1340508 logs.go:282] 1 containers: [7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547]
	I1209 05:21:13.813868 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:21:13.817880 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:21:13.817967 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:21:13.845004 1340508 cri.go:89] found id: "40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b"
	I1209 05:21:13.845032 1340508 cri.go:89] found id: ""
	I1209 05:21:13.845041 1340508 logs.go:282] 1 containers: [40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b]
	I1209 05:21:13.845096 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:21:13.848628 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:21:13.848695 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:21:13.874175 1340508 cri.go:89] found id: ""
	I1209 05:21:13.874198 1340508 logs.go:282] 0 containers: []
	W1209 05:21:13.874206 1340508 logs.go:284] No container was found matching "coredns"
	I1209 05:21:13.874212 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:21:13.874276 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:21:13.903052 1340508 cri.go:89] found id: "ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a"
	I1209 05:21:13.903074 1340508 cri.go:89] found id: ""
	I1209 05:21:13.903088 1340508 logs.go:282] 1 containers: [ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a]
	I1209 05:21:13.903144 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:21:13.906952 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:21:13.907024 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:21:13.931458 1340508 cri.go:89] found id: ""
	I1209 05:21:13.931483 1340508 logs.go:282] 0 containers: []
	W1209 05:21:13.931491 1340508 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:21:13.931497 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:21:13.931557 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:21:13.957012 1340508 cri.go:89] found id: "f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da"
	I1209 05:21:13.957036 1340508 cri.go:89] found id: ""
	I1209 05:21:13.957045 1340508 logs.go:282] 1 containers: [f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da]
	I1209 05:21:13.957104 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:21:13.960845 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:21:13.960917 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:21:13.990206 1340508 cri.go:89] found id: ""
	I1209 05:21:13.990228 1340508 logs.go:282] 0 containers: []
	W1209 05:21:13.990238 1340508 logs.go:284] No container was found matching "kindnet"
	I1209 05:21:13.990245 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:21:13.990304 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:21:14.015337 1340508 cri.go:89] found id: ""
	I1209 05:21:14.015364 1340508 logs.go:282] 0 containers: []
	W1209 05:21:14.015373 1340508 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:21:14.015390 1340508 logs.go:123] Gathering logs for kubelet ...
	I1209 05:21:14.015402 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:21:14.073439 1340508 logs.go:123] Gathering logs for dmesg ...
	I1209 05:21:14.073474 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:21:14.090785 1340508 logs.go:123] Gathering logs for etcd [40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b] ...
	I1209 05:21:14.090815 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b"
	I1209 05:21:14.132038 1340508 logs.go:123] Gathering logs for kube-scheduler [ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a] ...
	I1209 05:21:14.132069 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a"
	I1209 05:21:14.168337 1340508 logs.go:123] Gathering logs for kube-controller-manager [f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da] ...
	I1209 05:21:14.168371 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da"
	I1209 05:21:14.231262 1340508 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:21:14.231291 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:21:14.311162 1340508 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:21:14.311183 1340508 logs.go:123] Gathering logs for kube-apiserver [7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547] ...
	I1209 05:21:14.311204 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547"
	I1209 05:21:14.348683 1340508 logs.go:123] Gathering logs for containerd ...
	I1209 05:21:14.348715 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:21:14.377365 1340508 logs.go:123] Gathering logs for container status ...
	I1209 05:21:14.377403 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:21:16.907316 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:21:16.917175 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:21:16.917240 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:21:16.941252 1340508 cri.go:89] found id: "7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547"
	I1209 05:21:16.941276 1340508 cri.go:89] found id: ""
	I1209 05:21:16.941285 1340508 logs.go:282] 1 containers: [7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547]
	I1209 05:21:16.941340 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:21:16.944937 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:21:16.945018 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:21:16.969323 1340508 cri.go:89] found id: "40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b"
	I1209 05:21:16.969346 1340508 cri.go:89] found id: ""
	I1209 05:21:16.969355 1340508 logs.go:282] 1 containers: [40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b]
	I1209 05:21:16.969412 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:21:16.973091 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:21:16.973167 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:21:17.000308 1340508 cri.go:89] found id: ""
	I1209 05:21:17.000337 1340508 logs.go:282] 0 containers: []
	W1209 05:21:17.000345 1340508 logs.go:284] No container was found matching "coredns"
	I1209 05:21:17.000352 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:21:17.000415 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:21:17.027406 1340508 cri.go:89] found id: "ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a"
	I1209 05:21:17.027431 1340508 cri.go:89] found id: ""
	I1209 05:21:17.027439 1340508 logs.go:282] 1 containers: [ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a]
	I1209 05:21:17.027494 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:21:17.031164 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:21:17.031265 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:21:17.058670 1340508 cri.go:89] found id: ""
	I1209 05:21:17.058697 1340508 logs.go:282] 0 containers: []
	W1209 05:21:17.058706 1340508 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:21:17.058712 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:21:17.058833 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:21:17.083633 1340508 cri.go:89] found id: "f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da"
	I1209 05:21:17.083670 1340508 cri.go:89] found id: ""
	I1209 05:21:17.083679 1340508 logs.go:282] 1 containers: [f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da]
	I1209 05:21:17.083777 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:21:17.087444 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:21:17.087566 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:21:17.110713 1340508 cri.go:89] found id: ""
	I1209 05:21:17.110788 1340508 logs.go:282] 0 containers: []
	W1209 05:21:17.110810 1340508 logs.go:284] No container was found matching "kindnet"
	I1209 05:21:17.110828 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:21:17.110910 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:21:17.136147 1340508 cri.go:89] found id: ""
	I1209 05:21:17.136216 1340508 logs.go:282] 0 containers: []
	W1209 05:21:17.136238 1340508 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:21:17.136264 1340508 logs.go:123] Gathering logs for kubelet ...
	I1209 05:21:17.136289 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:21:17.194120 1340508 logs.go:123] Gathering logs for kube-apiserver [7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547] ...
	I1209 05:21:17.194198 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547"
	I1209 05:21:17.246735 1340508 logs.go:123] Gathering logs for etcd [40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b] ...
	I1209 05:21:17.246808 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b"
	I1209 05:21:17.278359 1340508 logs.go:123] Gathering logs for kube-scheduler [ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a] ...
	I1209 05:21:17.278387 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a"
	I1209 05:21:17.310699 1340508 logs.go:123] Gathering logs for containerd ...
	I1209 05:21:17.310730 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:21:17.338973 1340508 logs.go:123] Gathering logs for container status ...
	I1209 05:21:17.339005 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:21:17.366429 1340508 logs.go:123] Gathering logs for dmesg ...
	I1209 05:21:17.366455 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:21:17.382791 1340508 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:21:17.382822 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:21:17.443736 1340508 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:21:17.443756 1340508 logs.go:123] Gathering logs for kube-controller-manager [f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da] ...
	I1209 05:21:17.443776 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da"
	I1209 05:21:19.972353 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:21:19.982574 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:21:19.982645 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:21:20.016336 1340508 cri.go:89] found id: "7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547"
	I1209 05:21:20.016358 1340508 cri.go:89] found id: ""
	I1209 05:21:20.016366 1340508 logs.go:282] 1 containers: [7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547]
	I1209 05:21:20.016429 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:21:20.020673 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:21:20.020751 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:21:20.046707 1340508 cri.go:89] found id: "40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b"
	I1209 05:21:20.046730 1340508 cri.go:89] found id: ""
	I1209 05:21:20.046739 1340508 logs.go:282] 1 containers: [40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b]
	I1209 05:21:20.046794 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:21:20.050566 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:21:20.050640 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:21:20.077740 1340508 cri.go:89] found id: ""
	I1209 05:21:20.077766 1340508 logs.go:282] 0 containers: []
	W1209 05:21:20.077793 1340508 logs.go:284] No container was found matching "coredns"
	I1209 05:21:20.077800 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:21:20.077892 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:21:20.105951 1340508 cri.go:89] found id: "ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a"
	I1209 05:21:20.105988 1340508 cri.go:89] found id: ""
	I1209 05:21:20.105999 1340508 logs.go:282] 1 containers: [ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a]
	I1209 05:21:20.106106 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:21:20.110152 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:21:20.110235 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:21:20.138572 1340508 cri.go:89] found id: ""
	I1209 05:21:20.138597 1340508 logs.go:282] 0 containers: []
	W1209 05:21:20.138606 1340508 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:21:20.138612 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:21:20.138689 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:21:20.168584 1340508 cri.go:89] found id: "f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da"
	I1209 05:21:20.168617 1340508 cri.go:89] found id: ""
	I1209 05:21:20.168626 1340508 logs.go:282] 1 containers: [f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da]
	I1209 05:21:20.168684 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:21:20.172760 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:21:20.172851 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:21:20.205948 1340508 cri.go:89] found id: ""
	I1209 05:21:20.205974 1340508 logs.go:282] 0 containers: []
	W1209 05:21:20.205983 1340508 logs.go:284] No container was found matching "kindnet"
	I1209 05:21:20.205989 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:21:20.206059 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:21:20.244490 1340508 cri.go:89] found id: ""
	I1209 05:21:20.244512 1340508 logs.go:282] 0 containers: []
	W1209 05:21:20.244520 1340508 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:21:20.244535 1340508 logs.go:123] Gathering logs for kubelet ...
	I1209 05:21:20.244545 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:21:20.303135 1340508 logs.go:123] Gathering logs for dmesg ...
	I1209 05:21:20.303169 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:21:20.319529 1340508 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:21:20.319557 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:21:20.378775 1340508 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:21:20.378794 1340508 logs.go:123] Gathering logs for kube-apiserver [7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547] ...
	I1209 05:21:20.378806 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547"
	I1209 05:21:20.424893 1340508 logs.go:123] Gathering logs for etcd [40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b] ...
	I1209 05:21:20.424924 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b"
	I1209 05:21:20.456171 1340508 logs.go:123] Gathering logs for kube-scheduler [ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a] ...
	I1209 05:21:20.456199 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a"
	I1209 05:21:20.494102 1340508 logs.go:123] Gathering logs for kube-controller-manager [f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da] ...
	I1209 05:21:20.494135 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da"
	I1209 05:21:20.528349 1340508 logs.go:123] Gathering logs for containerd ...
	I1209 05:21:20.528383 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:21:20.555967 1340508 logs.go:123] Gathering logs for container status ...
	I1209 05:21:20.555999 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
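(From here the log repeats the same cycle roughly every three seconds: `pgrep` for a running kube-apiserver, re-enumerate CRI containers, re-gather logs, and try `kubectl describe nodes` again. A hedged sketch of such a wait loop, assuming a hypothetical runSSH helper standing in for minikube's ssh_runner:)

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // runSSH is a stand-in for minikube's ssh_runner; here it runs locally.
    func runSSH(args ...string) error {
    	return exec.Command(args[0], args[1:]...).Run()
    }

    func waitForAPIServer(timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		// pgrep exits 0 only if a matching process exists.
    		if err := runSSH("sudo", "pgrep", "-xnf",
    			"kube-apiserver.*minikube.*"); err == nil {
    			return nil
    		}
    		time.Sleep(3 * time.Second) // matches the ~3s cadence in the log
    	}
    	return fmt.Errorf("kube-apiserver process not found within %s", timeout)
    }

    func main() {
    	if err := waitForAPIServer(30 * time.Second); err != nil {
    		fmt.Println(err)
    	}
    }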
	I1209 05:21:23.085488 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:21:23.095311 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:21:23.095384 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:21:23.119367 1340508 cri.go:89] found id: "7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547"
	I1209 05:21:23.119388 1340508 cri.go:89] found id: ""
	I1209 05:21:23.119396 1340508 logs.go:282] 1 containers: [7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547]
	I1209 05:21:23.119453 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:21:23.123698 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:21:23.123765 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:21:23.148747 1340508 cri.go:89] found id: "40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b"
	I1209 05:21:23.148770 1340508 cri.go:89] found id: ""
	I1209 05:21:23.148779 1340508 logs.go:282] 1 containers: [40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b]
	I1209 05:21:23.148837 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:21:23.152331 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:21:23.152416 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:21:23.177850 1340508 cri.go:89] found id: ""
	I1209 05:21:23.177878 1340508 logs.go:282] 0 containers: []
	W1209 05:21:23.177886 1340508 logs.go:284] No container was found matching "coredns"
	I1209 05:21:23.177892 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:21:23.177958 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:21:23.212756 1340508 cri.go:89] found id: "ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a"
	I1209 05:21:23.212781 1340508 cri.go:89] found id: ""
	I1209 05:21:23.212807 1340508 logs.go:282] 1 containers: [ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a]
	I1209 05:21:23.212872 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:21:23.217869 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:21:23.217967 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:21:23.251642 1340508 cri.go:89] found id: ""
	I1209 05:21:23.251717 1340508 logs.go:282] 0 containers: []
	W1209 05:21:23.251740 1340508 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:21:23.251759 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:21:23.251841 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:21:23.275779 1340508 cri.go:89] found id: "f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da"
	I1209 05:21:23.275802 1340508 cri.go:89] found id: ""
	I1209 05:21:23.275811 1340508 logs.go:282] 1 containers: [f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da]
	I1209 05:21:23.275882 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:21:23.279452 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:21:23.279523 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:21:23.303568 1340508 cri.go:89] found id: ""
	I1209 05:21:23.303594 1340508 logs.go:282] 0 containers: []
	W1209 05:21:23.303602 1340508 logs.go:284] No container was found matching "kindnet"
	I1209 05:21:23.303609 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:21:23.303674 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:21:23.330546 1340508 cri.go:89] found id: ""
	I1209 05:21:23.330573 1340508 logs.go:282] 0 containers: []
	W1209 05:21:23.330581 1340508 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:21:23.330595 1340508 logs.go:123] Gathering logs for dmesg ...
	I1209 05:21:23.330606 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:21:23.346322 1340508 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:21:23.346354 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:21:23.413997 1340508 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:21:23.414017 1340508 logs.go:123] Gathering logs for kube-controller-manager [f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da] ...
	I1209 05:21:23.414031 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da"
	I1209 05:21:23.446189 1340508 logs.go:123] Gathering logs for containerd ...
	I1209 05:21:23.446217 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:21:23.475146 1340508 logs.go:123] Gathering logs for kubelet ...
	I1209 05:21:23.475181 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:21:23.535695 1340508 logs.go:123] Gathering logs for kube-apiserver [7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547] ...
	I1209 05:21:23.535732 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547"
	I1209 05:21:23.573435 1340508 logs.go:123] Gathering logs for etcd [40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b] ...
	I1209 05:21:23.573464 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b"
	I1209 05:21:23.604264 1340508 logs.go:123] Gathering logs for kube-scheduler [ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a] ...
	I1209 05:21:23.604297 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a"
	I1209 05:21:23.634241 1340508 logs.go:123] Gathering logs for container status ...
	I1209 05:21:23.634268 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:21:26.174330 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:21:26.188600 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:21:26.188675 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:21:26.239689 1340508 cri.go:89] found id: "7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547"
	I1209 05:21:26.239715 1340508 cri.go:89] found id: ""
	I1209 05:21:26.239728 1340508 logs.go:282] 1 containers: [7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547]
	I1209 05:21:26.239793 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:21:26.244054 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:21:26.244133 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:21:26.273186 1340508 cri.go:89] found id: "40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b"
	I1209 05:21:26.273210 1340508 cri.go:89] found id: ""
	I1209 05:21:26.273218 1340508 logs.go:282] 1 containers: [40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b]
	I1209 05:21:26.273275 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:21:26.276986 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:21:26.277064 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:21:26.302953 1340508 cri.go:89] found id: ""
	I1209 05:21:26.302977 1340508 logs.go:282] 0 containers: []
	W1209 05:21:26.302986 1340508 logs.go:284] No container was found matching "coredns"
	I1209 05:21:26.302992 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:21:26.303056 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:21:26.331402 1340508 cri.go:89] found id: "ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a"
	I1209 05:21:26.331422 1340508 cri.go:89] found id: ""
	I1209 05:21:26.331431 1340508 logs.go:282] 1 containers: [ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a]
	I1209 05:21:26.331486 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:21:26.335233 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:21:26.335306 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:21:26.359856 1340508 cri.go:89] found id: ""
	I1209 05:21:26.359876 1340508 logs.go:282] 0 containers: []
	W1209 05:21:26.359884 1340508 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:21:26.359890 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:21:26.359949 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:21:26.385270 1340508 cri.go:89] found id: "f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da"
	I1209 05:21:26.385290 1340508 cri.go:89] found id: ""
	I1209 05:21:26.385299 1340508 logs.go:282] 1 containers: [f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da]
	I1209 05:21:26.385355 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:21:26.389082 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:21:26.389158 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:21:26.413875 1340508 cri.go:89] found id: ""
	I1209 05:21:26.413902 1340508 logs.go:282] 0 containers: []
	W1209 05:21:26.413912 1340508 logs.go:284] No container was found matching "kindnet"
	I1209 05:21:26.413918 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:21:26.413981 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:21:26.439512 1340508 cri.go:89] found id: ""
	I1209 05:21:26.439538 1340508 logs.go:282] 0 containers: []
	W1209 05:21:26.439547 1340508 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:21:26.439570 1340508 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:21:26.439582 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:21:26.502220 1340508 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:21:26.502238 1340508 logs.go:123] Gathering logs for kube-apiserver [7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547] ...
	I1209 05:21:26.502251 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547"
	I1209 05:21:26.550086 1340508 logs.go:123] Gathering logs for kube-controller-manager [f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da] ...
	I1209 05:21:26.550118 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da"
	I1209 05:21:26.580639 1340508 logs.go:123] Gathering logs for containerd ...
	I1209 05:21:26.580666 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:21:26.609127 1340508 logs.go:123] Gathering logs for kubelet ...
	I1209 05:21:26.609163 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:21:26.667215 1340508 logs.go:123] Gathering logs for dmesg ...
	I1209 05:21:26.667251 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:21:26.683014 1340508 logs.go:123] Gathering logs for etcd [40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b] ...
	I1209 05:21:26.683046 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b"
	I1209 05:21:26.719440 1340508 logs.go:123] Gathering logs for kube-scheduler [ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a] ...
	I1209 05:21:26.719474 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a"
	I1209 05:21:26.751925 1340508 logs.go:123] Gathering logs for container status ...
	I1209 05:21:26.751955 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
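(The "container status" step uses a shell fallback — `sudo `which crictl || echo crictl` ps -a || sudo docker ps -a` — so it still produces output on nodes where crictl is missing or broken. The same idea expressed in Go, with an illustrative helper name:)

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func containerStatus() (string, error) {
    	// Prefer crictl; `sudo crictl ps -a` lists all CRI containers.
    	if out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput(); err == nil {
    		return string(out), nil
    	}
    	// Fall back to docker, mirroring the `|| sudo docker ps -a` branch.
    	out, err := exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
    	if err != nil {
    		return "", fmt.Errorf("neither crictl nor docker usable: %w", err)
    	}
    	return string(out), nil
    }

    func main() {
    	out, err := containerStatus()
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Print(out)
    }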
	I1209 05:21:29.280469 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:21:29.290744 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:21:29.290817 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:21:29.319219 1340508 cri.go:89] found id: "7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547"
	I1209 05:21:29.319242 1340508 cri.go:89] found id: ""
	I1209 05:21:29.319270 1340508 logs.go:282] 1 containers: [7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547]
	I1209 05:21:29.319330 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:21:29.323198 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:21:29.323303 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:21:29.356860 1340508 cri.go:89] found id: "40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b"
	I1209 05:21:29.356884 1340508 cri.go:89] found id: ""
	I1209 05:21:29.356892 1340508 logs.go:282] 1 containers: [40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b]
	I1209 05:21:29.356971 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:21:29.360902 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:21:29.360976 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:21:29.395989 1340508 cri.go:89] found id: ""
	I1209 05:21:29.396053 1340508 logs.go:282] 0 containers: []
	W1209 05:21:29.396062 1340508 logs.go:284] No container was found matching "coredns"
	I1209 05:21:29.396069 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:21:29.396136 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:21:29.437041 1340508 cri.go:89] found id: "ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a"
	I1209 05:21:29.437064 1340508 cri.go:89] found id: ""
	I1209 05:21:29.437073 1340508 logs.go:282] 1 containers: [ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a]
	I1209 05:21:29.437130 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:21:29.441525 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:21:29.441594 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:21:29.481996 1340508 cri.go:89] found id: ""
	I1209 05:21:29.482025 1340508 logs.go:282] 0 containers: []
	W1209 05:21:29.482034 1340508 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:21:29.482040 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:21:29.482099 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:21:29.515206 1340508 cri.go:89] found id: "f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da"
	I1209 05:21:29.515232 1340508 cri.go:89] found id: ""
	I1209 05:21:29.515244 1340508 logs.go:282] 1 containers: [f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da]
	I1209 05:21:29.515302 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:21:29.519833 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:21:29.519915 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:21:29.558469 1340508 cri.go:89] found id: ""
	I1209 05:21:29.558502 1340508 logs.go:282] 0 containers: []
	W1209 05:21:29.558512 1340508 logs.go:284] No container was found matching "kindnet"
	I1209 05:21:29.558518 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:21:29.558580 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:21:29.595093 1340508 cri.go:89] found id: ""
	I1209 05:21:29.595116 1340508 logs.go:282] 0 containers: []
	W1209 05:21:29.595124 1340508 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:21:29.595137 1340508 logs.go:123] Gathering logs for kubelet ...
	I1209 05:21:29.595148 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:21:29.664235 1340508 logs.go:123] Gathering logs for dmesg ...
	I1209 05:21:29.664280 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:21:29.680802 1340508 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:21:29.680833 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:21:29.743145 1340508 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:21:29.743166 1340508 logs.go:123] Gathering logs for containerd ...
	I1209 05:21:29.743180 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:21:29.772782 1340508 logs.go:123] Gathering logs for kube-apiserver [7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547] ...
	I1209 05:21:29.772820 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547"
	I1209 05:21:29.822952 1340508 logs.go:123] Gathering logs for etcd [40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b] ...
	I1209 05:21:29.822983 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b"
	I1209 05:21:29.855436 1340508 logs.go:123] Gathering logs for kube-scheduler [ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a] ...
	I1209 05:21:29.855472 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a"
	I1209 05:21:29.888776 1340508 logs.go:123] Gathering logs for kube-controller-manager [f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da] ...
	I1209 05:21:29.888809 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da"
	I1209 05:21:29.916790 1340508 logs.go:123] Gathering logs for container status ...
	I1209 05:21:29.916818 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:21:32.449284 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:21:32.462074 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:21:32.462158 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:21:32.498914 1340508 cri.go:89] found id: "7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547"
	I1209 05:21:32.498941 1340508 cri.go:89] found id: ""
	I1209 05:21:32.498950 1340508 logs.go:282] 1 containers: [7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547]
	I1209 05:21:32.499025 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:21:32.504838 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:21:32.504931 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:21:32.550617 1340508 cri.go:89] found id: "40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b"
	I1209 05:21:32.550643 1340508 cri.go:89] found id: ""
	I1209 05:21:32.550667 1340508 logs.go:282] 1 containers: [40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b]
	I1209 05:21:32.550730 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:21:32.555640 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:21:32.555751 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:21:32.598325 1340508 cri.go:89] found id: ""
	I1209 05:21:32.598370 1340508 logs.go:282] 0 containers: []
	W1209 05:21:32.598379 1340508 logs.go:284] No container was found matching "coredns"
	I1209 05:21:32.598386 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:21:32.598464 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:21:32.631391 1340508 cri.go:89] found id: "ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a"
	I1209 05:21:32.631417 1340508 cri.go:89] found id: ""
	I1209 05:21:32.631436 1340508 logs.go:282] 1 containers: [ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a]
	I1209 05:21:32.631505 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:21:32.636315 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:21:32.636412 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:21:32.666333 1340508 cri.go:89] found id: ""
	I1209 05:21:32.666407 1340508 logs.go:282] 0 containers: []
	W1209 05:21:32.666429 1340508 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:21:32.666447 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:21:32.666532 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:21:32.709269 1340508 cri.go:89] found id: "f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da"
	I1209 05:21:32.709342 1340508 cri.go:89] found id: ""
	I1209 05:21:32.709364 1340508 logs.go:282] 1 containers: [f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da]
	I1209 05:21:32.709454 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:21:32.714071 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:21:32.714156 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:21:32.745444 1340508 cri.go:89] found id: ""
	I1209 05:21:32.745478 1340508 logs.go:282] 0 containers: []
	W1209 05:21:32.745487 1340508 logs.go:284] No container was found matching "kindnet"
	I1209 05:21:32.745493 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:21:32.745560 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:21:32.785097 1340508 cri.go:89] found id: ""
	I1209 05:21:32.785145 1340508 logs.go:282] 0 containers: []
	W1209 05:21:32.785159 1340508 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:21:32.785176 1340508 logs.go:123] Gathering logs for kube-controller-manager [f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da] ...
	I1209 05:21:32.785200 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da"
	I1209 05:21:32.821902 1340508 logs.go:123] Gathering logs for containerd ...
	I1209 05:21:32.821942 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:21:32.858306 1340508 logs.go:123] Gathering logs for kubelet ...
	I1209 05:21:32.858356 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:21:32.936879 1340508 logs.go:123] Gathering logs for kube-scheduler [ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a] ...
	I1209 05:21:32.936920 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a"
	I1209 05:21:33.010424 1340508 logs.go:123] Gathering logs for container status ...
	I1209 05:21:33.010468 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:21:33.046798 1340508 logs.go:123] Gathering logs for dmesg ...
	I1209 05:21:33.046825 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:21:33.064049 1340508 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:21:33.064079 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:21:33.130285 1340508 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:21:33.130305 1340508 logs.go:123] Gathering logs for kube-apiserver [7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547] ...
	I1209 05:21:33.130317 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547"
	I1209 05:21:33.165547 1340508 logs.go:123] Gathering logs for etcd [40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b] ...
	I1209 05:21:33.165580 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b"
	I1209 05:21:35.703160 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:21:35.716731 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:21:35.716797 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:21:35.753128 1340508 cri.go:89] found id: "7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547"
	I1209 05:21:35.753157 1340508 cri.go:89] found id: ""
	I1209 05:21:35.753166 1340508 logs.go:282] 1 containers: [7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547]
	I1209 05:21:35.753241 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:21:35.758041 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:21:35.758185 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:21:35.800498 1340508 cri.go:89] found id: "40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b"
	I1209 05:21:35.800562 1340508 cri.go:89] found id: ""
	I1209 05:21:35.800587 1340508 logs.go:282] 1 containers: [40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b]
	I1209 05:21:35.800669 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:21:35.806168 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:21:35.806293 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:21:35.838575 1340508 cri.go:89] found id: ""
	I1209 05:21:35.838656 1340508 logs.go:282] 0 containers: []
	W1209 05:21:35.838689 1340508 logs.go:284] No container was found matching "coredns"
	I1209 05:21:35.838714 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:21:35.838816 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:21:35.873296 1340508 cri.go:89] found id: "ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a"
	I1209 05:21:35.873366 1340508 cri.go:89] found id: ""
	I1209 05:21:35.873394 1340508 logs.go:282] 1 containers: [ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a]
	I1209 05:21:35.873481 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:21:35.877934 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:21:35.878045 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:21:35.910222 1340508 cri.go:89] found id: ""
	I1209 05:21:35.910317 1340508 logs.go:282] 0 containers: []
	W1209 05:21:35.910339 1340508 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:21:35.910358 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:21:35.910456 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:21:35.951831 1340508 cri.go:89] found id: "f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da"
	I1209 05:21:35.951900 1340508 cri.go:89] found id: ""
	I1209 05:21:35.951929 1340508 logs.go:282] 1 containers: [f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da]
	I1209 05:21:35.952056 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:21:35.963505 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:21:35.963644 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:21:36.005998 1340508 cri.go:89] found id: ""
	I1209 05:21:36.006102 1340508 logs.go:282] 0 containers: []
	W1209 05:21:36.006130 1340508 logs.go:284] No container was found matching "kindnet"
	I1209 05:21:36.006150 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:21:36.006267 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:21:36.045265 1340508 cri.go:89] found id: ""
	I1209 05:21:36.045342 1340508 logs.go:282] 0 containers: []
	W1209 05:21:36.045364 1340508 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:21:36.045392 1340508 logs.go:123] Gathering logs for kubelet ...
	I1209 05:21:36.045425 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:21:36.123569 1340508 logs.go:123] Gathering logs for dmesg ...
	I1209 05:21:36.123656 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:21:36.140749 1340508 logs.go:123] Gathering logs for etcd [40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b] ...
	I1209 05:21:36.140827 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b"
	I1209 05:21:36.179080 1340508 logs.go:123] Gathering logs for containerd ...
	I1209 05:21:36.179154 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:21:36.209431 1340508 logs.go:123] Gathering logs for container status ...
	I1209 05:21:36.209472 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:21:36.245944 1340508 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:21:36.245974 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:21:36.328026 1340508 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:21:36.328049 1340508 logs.go:123] Gathering logs for kube-apiserver [7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547] ...
	I1209 05:21:36.328063 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547"
	I1209 05:21:36.379540 1340508 logs.go:123] Gathering logs for kube-scheduler [ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a] ...
	I1209 05:21:36.379574 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a"
	I1209 05:21:36.436274 1340508 logs.go:123] Gathering logs for kube-controller-manager [f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da] ...
	I1209 05:21:36.436307 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da"
	I1209 05:21:38.974528 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:21:38.984277 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:21:38.984346 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:21:39.012000 1340508 cri.go:89] found id: "7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547"
	I1209 05:21:39.012065 1340508 cri.go:89] found id: ""
	I1209 05:21:39.012074 1340508 logs.go:282] 1 containers: [7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547]
	I1209 05:21:39.012137 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:21:39.016281 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:21:39.016357 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:21:39.041615 1340508 cri.go:89] found id: "40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b"
	I1209 05:21:39.041638 1340508 cri.go:89] found id: ""
	I1209 05:21:39.041646 1340508 logs.go:282] 1 containers: [40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b]
	I1209 05:21:39.041709 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:21:39.045409 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:21:39.045480 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:21:39.069720 1340508 cri.go:89] found id: ""
	I1209 05:21:39.069750 1340508 logs.go:282] 0 containers: []
	W1209 05:21:39.069759 1340508 logs.go:284] No container was found matching "coredns"
	I1209 05:21:39.069766 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:21:39.069828 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:21:39.094201 1340508 cri.go:89] found id: "ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a"
	I1209 05:21:39.094230 1340508 cri.go:89] found id: ""
	I1209 05:21:39.094239 1340508 logs.go:282] 1 containers: [ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a]
	I1209 05:21:39.094296 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:21:39.098052 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:21:39.098127 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:21:39.122699 1340508 cri.go:89] found id: ""
	I1209 05:21:39.122728 1340508 logs.go:282] 0 containers: []
	W1209 05:21:39.122737 1340508 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:21:39.122743 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:21:39.122802 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:21:39.150342 1340508 cri.go:89] found id: "f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da"
	I1209 05:21:39.150370 1340508 cri.go:89] found id: ""
	I1209 05:21:39.150379 1340508 logs.go:282] 1 containers: [f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da]
	I1209 05:21:39.150436 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:21:39.154143 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:21:39.154212 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:21:39.178226 1340508 cri.go:89] found id: ""
	I1209 05:21:39.178252 1340508 logs.go:282] 0 containers: []
	W1209 05:21:39.178260 1340508 logs.go:284] No container was found matching "kindnet"
	I1209 05:21:39.178266 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:21:39.178383 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:21:39.210257 1340508 cri.go:89] found id: ""
	I1209 05:21:39.210284 1340508 logs.go:282] 0 containers: []
	W1209 05:21:39.210292 1340508 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:21:39.210339 1340508 logs.go:123] Gathering logs for etcd [40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b] ...
	I1209 05:21:39.210357 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b"
	I1209 05:21:39.256163 1340508 logs.go:123] Gathering logs for containerd ...
	I1209 05:21:39.256194 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:21:39.286318 1340508 logs.go:123] Gathering logs for dmesg ...
	I1209 05:21:39.286354 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:21:39.302615 1340508 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:21:39.302645 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:21:39.366660 1340508 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:21:39.366682 1340508 logs.go:123] Gathering logs for kube-apiserver [7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547] ...
	I1209 05:21:39.366696 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547"
	I1209 05:21:39.408924 1340508 logs.go:123] Gathering logs for kube-scheduler [ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a] ...
	I1209 05:21:39.408962 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a"
	I1209 05:21:39.446220 1340508 logs.go:123] Gathering logs for kube-controller-manager [f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da] ...
	I1209 05:21:39.446374 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da"
	I1209 05:21:39.477195 1340508 logs.go:123] Gathering logs for container status ...
	I1209 05:21:39.477225 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:21:39.505664 1340508 logs.go:123] Gathering logs for kubelet ...
	I1209 05:21:39.505692 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
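(Alongside per-container logs, each iteration tails the systemd journals for the kubelet and containerd units (`journalctl -u <unit> -n 400`). A small sketch of that gathering step; the function name is illustrative:)

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // unitLogs returns the last n lines of a systemd unit's journal.
    func unitLogs(unit string, n int) (string, error) {
    	out, err := exec.Command("sudo", "journalctl", "-u", unit,
    		"-n", fmt.Sprint(n)).CombinedOutput()
    	return string(out), err
    }

    func main() {
    	for _, u := range []string{"kubelet", "containerd"} {
    		logs, err := unitLogs(u, 400)
    		if err != nil {
    			fmt.Println(u, "error:", err)
    			continue
    		}
    		fmt.Printf("== %s (%d bytes) ==\n", u, len(logs))
    	}
    }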
	I1209 05:21:42.071250 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:21:42.086507 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:21:42.086669 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:21:42.121300 1340508 cri.go:89] found id: "7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547"
	I1209 05:21:42.121335 1340508 cri.go:89] found id: ""
	I1209 05:21:42.121345 1340508 logs.go:282] 1 containers: [7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547]
	I1209 05:21:42.121514 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:21:42.126834 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:21:42.126941 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:21:42.174654 1340508 cri.go:89] found id: "40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b"
	I1209 05:21:42.174679 1340508 cri.go:89] found id: ""
	I1209 05:21:42.174689 1340508 logs.go:282] 1 containers: [40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b]
	I1209 05:21:42.174763 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:21:42.180883 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:21:42.180968 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:21:42.218799 1340508 cri.go:89] found id: ""
	I1209 05:21:42.218825 1340508 logs.go:282] 0 containers: []
	W1209 05:21:42.218833 1340508 logs.go:284] No container was found matching "coredns"
	I1209 05:21:42.218840 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:21:42.218911 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:21:42.264291 1340508 cri.go:89] found id: "ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a"
	I1209 05:21:42.264377 1340508 cri.go:89] found id: ""
	I1209 05:21:42.264403 1340508 logs.go:282] 1 containers: [ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a]
	I1209 05:21:42.264503 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:21:42.270078 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:21:42.270220 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:21:42.303841 1340508 cri.go:89] found id: ""
	I1209 05:21:42.303928 1340508 logs.go:282] 0 containers: []
	W1209 05:21:42.303953 1340508 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:21:42.303975 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:21:42.304107 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:21:42.335770 1340508 cri.go:89] found id: "f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da"
	I1209 05:21:42.335862 1340508 cri.go:89] found id: ""
	I1209 05:21:42.335887 1340508 logs.go:282] 1 containers: [f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da]
	I1209 05:21:42.335987 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:21:42.341304 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:21:42.341401 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:21:42.367820 1340508 cri.go:89] found id: ""
	I1209 05:21:42.367847 1340508 logs.go:282] 0 containers: []
	W1209 05:21:42.367856 1340508 logs.go:284] No container was found matching "kindnet"
	I1209 05:21:42.367869 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:21:42.367930 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:21:42.397198 1340508 cri.go:89] found id: ""
	I1209 05:21:42.397281 1340508 logs.go:282] 0 containers: []
	W1209 05:21:42.397295 1340508 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:21:42.397310 1340508 logs.go:123] Gathering logs for kubelet ...
	I1209 05:21:42.397327 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:21:42.459028 1340508 logs.go:123] Gathering logs for kube-apiserver [7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547] ...
	I1209 05:21:42.459067 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547"
	I1209 05:21:42.493226 1340508 logs.go:123] Gathering logs for kube-scheduler [ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a] ...
	I1209 05:21:42.493260 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a"
	I1209 05:21:42.534540 1340508 logs.go:123] Gathering logs for dmesg ...
	I1209 05:21:42.534574 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:21:42.551279 1340508 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:21:42.551307 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:21:42.615197 1340508 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:21:42.615216 1340508 logs.go:123] Gathering logs for etcd [40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b] ...
	I1209 05:21:42.615228 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b"
	I1209 05:21:42.652923 1340508 logs.go:123] Gathering logs for kube-controller-manager [f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da] ...
	I1209 05:21:42.652956 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da"
	I1209 05:21:42.681021 1340508 logs.go:123] Gathering logs for containerd ...
	I1209 05:21:42.681050 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:21:42.709955 1340508 logs.go:123] Gathering logs for container status ...
	I1209 05:21:42.709989 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:21:45.248867 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:21:45.272816 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:21:45.272926 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:21:45.305803 1340508 cri.go:89] found id: "7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547"
	I1209 05:21:45.305827 1340508 cri.go:89] found id: ""
	I1209 05:21:45.305836 1340508 logs.go:282] 1 containers: [7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547]
	I1209 05:21:45.305899 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:21:45.311554 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:21:45.311878 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:21:45.341705 1340508 cri.go:89] found id: "40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b"
	I1209 05:21:45.341728 1340508 cri.go:89] found id: ""
	I1209 05:21:45.341736 1340508 logs.go:282] 1 containers: [40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b]
	I1209 05:21:45.341805 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:21:45.345289 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:21:45.345357 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:21:45.369091 1340508 cri.go:89] found id: ""
	I1209 05:21:45.369117 1340508 logs.go:282] 0 containers: []
	W1209 05:21:45.369125 1340508 logs.go:284] No container was found matching "coredns"
	I1209 05:21:45.369131 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:21:45.369187 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:21:45.395207 1340508 cri.go:89] found id: "ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a"
	I1209 05:21:45.395231 1340508 cri.go:89] found id: ""
	I1209 05:21:45.395240 1340508 logs.go:282] 1 containers: [ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a]
	I1209 05:21:45.395297 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:21:45.398778 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:21:45.398870 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:21:45.427425 1340508 cri.go:89] found id: ""
	I1209 05:21:45.427450 1340508 logs.go:282] 0 containers: []
	W1209 05:21:45.427459 1340508 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:21:45.427465 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:21:45.427555 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:21:45.453061 1340508 cri.go:89] found id: "f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da"
	I1209 05:21:45.453085 1340508 cri.go:89] found id: ""
	I1209 05:21:45.453093 1340508 logs.go:282] 1 containers: [f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da]
	I1209 05:21:45.453149 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:21:45.456769 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:21:45.456843 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:21:45.484160 1340508 cri.go:89] found id: ""
	I1209 05:21:45.484184 1340508 logs.go:282] 0 containers: []
	W1209 05:21:45.484192 1340508 logs.go:284] No container was found matching "kindnet"
	I1209 05:21:45.484199 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:21:45.484262 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:21:45.508147 1340508 cri.go:89] found id: ""
	I1209 05:21:45.508175 1340508 logs.go:282] 0 containers: []
	W1209 05:21:45.508184 1340508 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:21:45.508198 1340508 logs.go:123] Gathering logs for kubelet ...
	I1209 05:21:45.508215 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:21:45.568727 1340508 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:21:45.568761 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:21:45.631842 1340508 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:21:45.631908 1340508 logs.go:123] Gathering logs for kube-scheduler [ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a] ...
	I1209 05:21:45.631935 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a"
	I1209 05:21:45.666249 1340508 logs.go:123] Gathering logs for kube-controller-manager [f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da] ...
	I1209 05:21:45.666278 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da"
	I1209 05:21:45.698401 1340508 logs.go:123] Gathering logs for dmesg ...
	I1209 05:21:45.698432 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:21:45.714724 1340508 logs.go:123] Gathering logs for kube-apiserver [7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547] ...
	I1209 05:21:45.714755 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547"
	I1209 05:21:45.753002 1340508 logs.go:123] Gathering logs for etcd [40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b] ...
	I1209 05:21:45.753042 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b"
	I1209 05:21:45.785043 1340508 logs.go:123] Gathering logs for containerd ...
	I1209 05:21:45.785079 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:21:45.816418 1340508 logs.go:123] Gathering logs for container status ...
	I1209 05:21:45.816462 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
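
The cycle above is the container-discovery half of minikube's log collection: for each control-plane component it runs sudo crictl ps -a --quiet --name=<component> on the node and treats empty output as "no container found" (the W-level lines). A minimal local sketch of that probe, in Go (illustrative only; the component list matches the log, but invoking crictl directly instead of through minikube's ssh_runner is a simplification):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// findContainers mirrors the probe in the log: `sudo crictl ps -a --quiet --name=<name>`
// prints one container ID per line; empty output means the component has no container.
func findContainers(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	// Same component set, in the same order, that each cycle of the log walks through.
	for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"} {
		ids, err := findContainers(name)
		if err != nil {
			fmt.Printf("probe %s failed: %v\n", name, err)
			continue
		}
		if len(ids) == 0 {
			fmt.Printf("no container matching %q\n", name) // the log's warning case
			continue
		}
		fmt.Printf("%s: %v\n", name, ids)
	}
}

In this run only kube-apiserver, etcd, kube-scheduler, and kube-controller-manager ever return an ID; coredns, kube-proxy, kindnet, and storage-provisioner stay empty for the whole wait.
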
	I1209 05:21:48.347622 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:21:48.358379 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:21:48.358447 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:21:48.383347 1340508 cri.go:89] found id: "7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547"
	I1209 05:21:48.383369 1340508 cri.go:89] found id: ""
	I1209 05:21:48.383378 1340508 logs.go:282] 1 containers: [7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547]
	I1209 05:21:48.383434 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:21:48.387143 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:21:48.387211 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:21:48.412178 1340508 cri.go:89] found id: "40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b"
	I1209 05:21:48.412202 1340508 cri.go:89] found id: ""
	I1209 05:21:48.412210 1340508 logs.go:282] 1 containers: [40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b]
	I1209 05:21:48.412269 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:21:48.415888 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:21:48.415972 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:21:48.447289 1340508 cri.go:89] found id: ""
	I1209 05:21:48.447312 1340508 logs.go:282] 0 containers: []
	W1209 05:21:48.447320 1340508 logs.go:284] No container was found matching "coredns"
	I1209 05:21:48.447338 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:21:48.447400 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:21:48.471630 1340508 cri.go:89] found id: "ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a"
	I1209 05:21:48.471652 1340508 cri.go:89] found id: ""
	I1209 05:21:48.471661 1340508 logs.go:282] 1 containers: [ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a]
	I1209 05:21:48.471721 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:21:48.475437 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:21:48.475513 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:21:48.500465 1340508 cri.go:89] found id: ""
	I1209 05:21:48.500489 1340508 logs.go:282] 0 containers: []
	W1209 05:21:48.500496 1340508 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:21:48.500503 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:21:48.500562 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:21:48.532742 1340508 cri.go:89] found id: "f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da"
	I1209 05:21:48.532763 1340508 cri.go:89] found id: ""
	I1209 05:21:48.532771 1340508 logs.go:282] 1 containers: [f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da]
	I1209 05:21:48.532827 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:21:48.536471 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:21:48.536545 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:21:48.562420 1340508 cri.go:89] found id: ""
	I1209 05:21:48.562446 1340508 logs.go:282] 0 containers: []
	W1209 05:21:48.562455 1340508 logs.go:284] No container was found matching "kindnet"
	I1209 05:21:48.562461 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:21:48.562520 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:21:48.587273 1340508 cri.go:89] found id: ""
	I1209 05:21:48.587298 1340508 logs.go:282] 0 containers: []
	W1209 05:21:48.587307 1340508 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:21:48.587321 1340508 logs.go:123] Gathering logs for dmesg ...
	I1209 05:21:48.587332 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:21:48.603344 1340508 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:21:48.603416 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:21:48.669065 1340508 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:21:48.669089 1340508 logs.go:123] Gathering logs for kube-apiserver [7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547] ...
	I1209 05:21:48.669103 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547"
	I1209 05:21:48.701897 1340508 logs.go:123] Gathering logs for kubelet ...
	I1209 05:21:48.701928 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:21:48.763507 1340508 logs.go:123] Gathering logs for etcd [40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b] ...
	I1209 05:21:48.763543 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b"
	I1209 05:21:48.795518 1340508 logs.go:123] Gathering logs for kube-scheduler [ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a] ...
	I1209 05:21:48.795546 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a"
	I1209 05:21:48.828629 1340508 logs.go:123] Gathering logs for kube-controller-manager [f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da] ...
	I1209 05:21:48.828662 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da"
	I1209 05:21:48.857901 1340508 logs.go:123] Gathering logs for containerd ...
	I1209 05:21:48.857931 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:21:48.888493 1340508 logs.go:123] Gathering logs for container status ...
	I1209 05:21:48.888530 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
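
Every "describe nodes" attempt fails identically: kubectl inside the guest gets "connection refused" on localhost:8443, meaning nothing is accepting connections on the port at all, even though an apiserver container ID (7af9450f…) keeps being found. A short TCP probe separates "container exists" from "apiserver is serving" (a sketch; it assumes it runs on the node where the apiserver should be listening):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// "connection refused" is an immediate RST from a closed port, which is
	// exactly what the repeated `kubectl describe nodes` failures report.
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is accepting connections")
}
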
	I1209 05:21:51.416623 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:21:51.426674 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:21:51.426751 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:21:51.454457 1340508 cri.go:89] found id: "7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547"
	I1209 05:21:51.454479 1340508 cri.go:89] found id: ""
	I1209 05:21:51.454488 1340508 logs.go:282] 1 containers: [7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547]
	I1209 05:21:51.454546 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:21:51.458251 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:21:51.458339 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:21:51.486222 1340508 cri.go:89] found id: "40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b"
	I1209 05:21:51.486245 1340508 cri.go:89] found id: ""
	I1209 05:21:51.486254 1340508 logs.go:282] 1 containers: [40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b]
	I1209 05:21:51.486331 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:21:51.490149 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:21:51.490240 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:21:51.515488 1340508 cri.go:89] found id: ""
	I1209 05:21:51.515511 1340508 logs.go:282] 0 containers: []
	W1209 05:21:51.515519 1340508 logs.go:284] No container was found matching "coredns"
	I1209 05:21:51.515525 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:21:51.515594 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:21:51.541005 1340508 cri.go:89] found id: "ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a"
	I1209 05:21:51.541032 1340508 cri.go:89] found id: ""
	I1209 05:21:51.541041 1340508 logs.go:282] 1 containers: [ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a]
	I1209 05:21:51.541116 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:21:51.544690 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:21:51.544809 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:21:51.568340 1340508 cri.go:89] found id: ""
	I1209 05:21:51.568365 1340508 logs.go:282] 0 containers: []
	W1209 05:21:51.568374 1340508 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:21:51.568380 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:21:51.568436 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:21:51.592123 1340508 cri.go:89] found id: "f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da"
	I1209 05:21:51.592144 1340508 cri.go:89] found id: ""
	I1209 05:21:51.592152 1340508 logs.go:282] 1 containers: [f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da]
	I1209 05:21:51.592224 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:21:51.596001 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:21:51.596105 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:21:51.620078 1340508 cri.go:89] found id: ""
	I1209 05:21:51.620099 1340508 logs.go:282] 0 containers: []
	W1209 05:21:51.620107 1340508 logs.go:284] No container was found matching "kindnet"
	I1209 05:21:51.620113 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:21:51.620192 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:21:51.644292 1340508 cri.go:89] found id: ""
	I1209 05:21:51.644317 1340508 logs.go:282] 0 containers: []
	W1209 05:21:51.644325 1340508 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:21:51.644338 1340508 logs.go:123] Gathering logs for kubelet ...
	I1209 05:21:51.644370 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:21:51.702615 1340508 logs.go:123] Gathering logs for kube-controller-manager [f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da] ...
	I1209 05:21:51.702650 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da"
	I1209 05:21:51.737206 1340508 logs.go:123] Gathering logs for containerd ...
	I1209 05:21:51.737236 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:21:51.766002 1340508 logs.go:123] Gathering logs for dmesg ...
	I1209 05:21:51.766037 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:21:51.783762 1340508 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:21:51.783803 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:21:51.848059 1340508 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:21:51.848077 1340508 logs.go:123] Gathering logs for kube-apiserver [7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547] ...
	I1209 05:21:51.848091 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547"
	I1209 05:21:51.885366 1340508 logs.go:123] Gathering logs for etcd [40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b] ...
	I1209 05:21:51.885405 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b"
	I1209 05:21:51.916316 1340508 logs.go:123] Gathering logs for kube-scheduler [ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a] ...
	I1209 05:21:51.916344 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a"
	I1209 05:21:51.962801 1340508 logs.go:123] Gathering logs for container status ...
	I1209 05:21:51.962830 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:21:54.501094 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:21:54.511349 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:21:54.511417 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:21:54.542897 1340508 cri.go:89] found id: "7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547"
	I1209 05:21:54.542917 1340508 cri.go:89] found id: ""
	I1209 05:21:54.542925 1340508 logs.go:282] 1 containers: [7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547]
	I1209 05:21:54.542982 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:21:54.546561 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:21:54.546629 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:21:54.573260 1340508 cri.go:89] found id: "40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b"
	I1209 05:21:54.573283 1340508 cri.go:89] found id: ""
	I1209 05:21:54.573290 1340508 logs.go:282] 1 containers: [40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b]
	I1209 05:21:54.573346 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:21:54.577002 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:21:54.577071 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:21:54.601729 1340508 cri.go:89] found id: ""
	I1209 05:21:54.601759 1340508 logs.go:282] 0 containers: []
	W1209 05:21:54.601767 1340508 logs.go:284] No container was found matching "coredns"
	I1209 05:21:54.601789 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:21:54.601892 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:21:54.627878 1340508 cri.go:89] found id: "ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a"
	I1209 05:21:54.627901 1340508 cri.go:89] found id: ""
	I1209 05:21:54.627909 1340508 logs.go:282] 1 containers: [ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a]
	I1209 05:21:54.627965 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:21:54.631517 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:21:54.631587 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:21:54.656746 1340508 cri.go:89] found id: ""
	I1209 05:21:54.656769 1340508 logs.go:282] 0 containers: []
	W1209 05:21:54.656777 1340508 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:21:54.656783 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:21:54.656842 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:21:54.689707 1340508 cri.go:89] found id: "f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da"
	I1209 05:21:54.689730 1340508 cri.go:89] found id: ""
	I1209 05:21:54.689738 1340508 logs.go:282] 1 containers: [f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da]
	I1209 05:21:54.689820 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:21:54.693572 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:21:54.693645 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:21:54.717313 1340508 cri.go:89] found id: ""
	I1209 05:21:54.717339 1340508 logs.go:282] 0 containers: []
	W1209 05:21:54.717347 1340508 logs.go:284] No container was found matching "kindnet"
	I1209 05:21:54.717353 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:21:54.717411 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:21:54.741321 1340508 cri.go:89] found id: ""
	I1209 05:21:54.741384 1340508 logs.go:282] 0 containers: []
	W1209 05:21:54.741398 1340508 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:21:54.741413 1340508 logs.go:123] Gathering logs for kubelet ...
	I1209 05:21:54.741424 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:21:54.798865 1340508 logs.go:123] Gathering logs for dmesg ...
	I1209 05:21:54.798900 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:21:54.815447 1340508 logs.go:123] Gathering logs for kube-apiserver [7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547] ...
	I1209 05:21:54.815481 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547"
	I1209 05:21:54.853863 1340508 logs.go:123] Gathering logs for kube-scheduler [ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a] ...
	I1209 05:21:54.853898 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a"
	I1209 05:21:54.888720 1340508 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:21:54.888750 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:21:54.962649 1340508 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:21:54.962671 1340508 logs.go:123] Gathering logs for etcd [40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b] ...
	I1209 05:21:54.962686 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b"
	I1209 05:21:55.004585 1340508 logs.go:123] Gathering logs for kube-controller-manager [f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da] ...
	I1209 05:21:55.004625 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da"
	I1209 05:21:55.044144 1340508 logs.go:123] Gathering logs for containerd ...
	I1209 05:21:55.044175 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:21:55.074545 1340508 logs.go:123] Gathering logs for container status ...
	I1209 05:21:55.074578 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
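
Once a component has a container ID, the matching "Gathering logs for …" step tails it with sudo /usr/local/bin/crictl logs --tail 400 <id>. The same call as a small helper (a sketch; local exec again stands in for minikube's ssh_runner):

package main

import (
	"fmt"
	"os/exec"
)

// tailContainerLogs mirrors the log's `sudo crictl logs --tail 400 <id>`.
// CombinedOutput captures both streams, since crictl relays the container's
// stdout and stderr separately.
func tailContainerLogs(id string, lines int) (string, error) {
	out, err := exec.Command("sudo", "crictl", "logs",
		"--tail", fmt.Sprint(lines), id).CombinedOutput()
	return string(out), err
}

func main() {
	// Scheduler container ID found throughout this section of the log.
	out, err := tailContainerLogs("ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a", 400)
	if err != nil {
		fmt.Println("crictl logs failed:", err)
	}
	fmt.Print(out)
}
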
	I1209 05:21:57.606877 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:21:57.616983 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:21:57.617054 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:21:57.640789 1340508 cri.go:89] found id: "7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547"
	I1209 05:21:57.640811 1340508 cri.go:89] found id: ""
	I1209 05:21:57.640819 1340508 logs.go:282] 1 containers: [7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547]
	I1209 05:21:57.640880 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:21:57.644417 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:21:57.644489 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:21:57.667963 1340508 cri.go:89] found id: "40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b"
	I1209 05:21:57.667988 1340508 cri.go:89] found id: ""
	I1209 05:21:57.667996 1340508 logs.go:282] 1 containers: [40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b]
	I1209 05:21:57.668079 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:21:57.671649 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:21:57.671712 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:21:57.695390 1340508 cri.go:89] found id: ""
	I1209 05:21:57.695419 1340508 logs.go:282] 0 containers: []
	W1209 05:21:57.695428 1340508 logs.go:284] No container was found matching "coredns"
	I1209 05:21:57.695434 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:21:57.695496 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:21:57.722158 1340508 cri.go:89] found id: "ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a"
	I1209 05:21:57.722183 1340508 cri.go:89] found id: ""
	I1209 05:21:57.722192 1340508 logs.go:282] 1 containers: [ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a]
	I1209 05:21:57.722275 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:21:57.725931 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:21:57.726009 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:21:57.749278 1340508 cri.go:89] found id: ""
	I1209 05:21:57.749301 1340508 logs.go:282] 0 containers: []
	W1209 05:21:57.749309 1340508 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:21:57.749315 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:21:57.749379 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:21:57.774131 1340508 cri.go:89] found id: "f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da"
	I1209 05:21:57.774153 1340508 cri.go:89] found id: ""
	I1209 05:21:57.774161 1340508 logs.go:282] 1 containers: [f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da]
	I1209 05:21:57.774217 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:21:57.777841 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:21:57.777918 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:21:57.801908 1340508 cri.go:89] found id: ""
	I1209 05:21:57.801931 1340508 logs.go:282] 0 containers: []
	W1209 05:21:57.801940 1340508 logs.go:284] No container was found matching "kindnet"
	I1209 05:21:57.801947 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:21:57.802005 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:21:57.829009 1340508 cri.go:89] found id: ""
	I1209 05:21:57.829034 1340508 logs.go:282] 0 containers: []
	W1209 05:21:57.829042 1340508 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:21:57.829056 1340508 logs.go:123] Gathering logs for kubelet ...
	I1209 05:21:57.829069 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:21:57.890790 1340508 logs.go:123] Gathering logs for kube-apiserver [7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547] ...
	I1209 05:21:57.890840 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547"
	I1209 05:21:57.930935 1340508 logs.go:123] Gathering logs for container status ...
	I1209 05:21:57.931006 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:21:57.981406 1340508 logs.go:123] Gathering logs for dmesg ...
	I1209 05:21:57.981471 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:21:57.997834 1340508 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:21:57.997909 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:21:58.065961 1340508 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:21:58.065983 1340508 logs.go:123] Gathering logs for etcd [40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b] ...
	I1209 05:21:58.065996 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b"
	I1209 05:21:58.103612 1340508 logs.go:123] Gathering logs for kube-scheduler [ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a] ...
	I1209 05:21:58.103651 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a"
	I1209 05:21:58.137567 1340508 logs.go:123] Gathering logs for kube-controller-manager [f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da] ...
	I1209 05:21:58.137602 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da"
	I1209 05:21:58.170394 1340508 logs.go:123] Gathering logs for containerd ...
	I1209 05:21:58.170433 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:22:00.701058 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:22:00.715032 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:22:00.715098 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:22:00.753514 1340508 cri.go:89] found id: "7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547"
	I1209 05:22:00.753534 1340508 cri.go:89] found id: ""
	I1209 05:22:00.753542 1340508 logs.go:282] 1 containers: [7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547]
	I1209 05:22:00.753598 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:22:00.761898 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:22:00.761981 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:22:00.796697 1340508 cri.go:89] found id: "40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b"
	I1209 05:22:00.796719 1340508 cri.go:89] found id: ""
	I1209 05:22:00.796728 1340508 logs.go:282] 1 containers: [40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b]
	I1209 05:22:00.796788 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:22:00.800775 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:22:00.800851 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:22:00.834676 1340508 cri.go:89] found id: ""
	I1209 05:22:00.834703 1340508 logs.go:282] 0 containers: []
	W1209 05:22:00.834711 1340508 logs.go:284] No container was found matching "coredns"
	I1209 05:22:00.834717 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:22:00.834776 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:22:00.868127 1340508 cri.go:89] found id: "ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a"
	I1209 05:22:00.868147 1340508 cri.go:89] found id: ""
	I1209 05:22:00.868155 1340508 logs.go:282] 1 containers: [ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a]
	I1209 05:22:00.868209 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:22:00.872374 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:22:00.872493 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:22:00.912208 1340508 cri.go:89] found id: ""
	I1209 05:22:00.912237 1340508 logs.go:282] 0 containers: []
	W1209 05:22:00.912245 1340508 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:22:00.912251 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:22:00.912309 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:22:00.951355 1340508 cri.go:89] found id: "f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da"
	I1209 05:22:00.951376 1340508 cri.go:89] found id: ""
	I1209 05:22:00.951401 1340508 logs.go:282] 1 containers: [f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da]
	I1209 05:22:00.951457 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:22:00.955073 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:22:00.955155 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:22:00.998300 1340508 cri.go:89] found id: ""
	I1209 05:22:00.998337 1340508 logs.go:282] 0 containers: []
	W1209 05:22:00.998346 1340508 logs.go:284] No container was found matching "kindnet"
	I1209 05:22:00.998353 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:22:00.998419 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:22:01.037669 1340508 cri.go:89] found id: ""
	I1209 05:22:01.037691 1340508 logs.go:282] 0 containers: []
	W1209 05:22:01.037699 1340508 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:22:01.037713 1340508 logs.go:123] Gathering logs for dmesg ...
	I1209 05:22:01.037725 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:22:01.057988 1340508 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:22:01.058014 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:22:01.139117 1340508 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:22:01.139139 1340508 logs.go:123] Gathering logs for kube-controller-manager [f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da] ...
	I1209 05:22:01.139154 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da"
	I1209 05:22:01.175443 1340508 logs.go:123] Gathering logs for containerd ...
	I1209 05:22:01.175482 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:22:01.207706 1340508 logs.go:123] Gathering logs for container status ...
	I1209 05:22:01.207745 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:22:01.241964 1340508 logs.go:123] Gathering logs for kube-apiserver [7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547] ...
	I1209 05:22:01.241995 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547"
	I1209 05:22:01.280174 1340508 logs.go:123] Gathering logs for etcd [40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b] ...
	I1209 05:22:01.280210 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b"
	I1209 05:22:01.315824 1340508 logs.go:123] Gathering logs for kube-scheduler [ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a] ...
	I1209 05:22:01.315859 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a"
	I1209 05:22:01.347615 1340508 logs.go:123] Gathering logs for kubelet ...
	I1209 05:22:01.347648 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
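
Taken together, the section is one polling loop: roughly every three seconds (read off the timestamps above) minikube re-runs sudo pgrep -xnf kube-apiserver.*minikube.* and, while the apiserver still is not healthy, regathers the full log set before trying again. The shape of that loop, sketched with a plain deadline (the interval and the overall timeout here are assumptions, not minikube's actual wait code):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func apiserverProcessRunning() bool {
	// Same check the log shows: pgrep exits 0 only when a matching process exists.
	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
}

func main() {
	deadline := time.Now().Add(8 * time.Minute) // assumed budget, in line with the ~500s test duration
	for time.Now().Before(deadline) {
		if apiserverProcessRunning() {
			fmt.Println("kube-apiserver process found")
			return
		}
		// In the log, each miss triggers a fresh "Gathering logs for ..." pass here.
		time.Sleep(3 * time.Second)
	}
	fmt.Println("timed out waiting for kube-apiserver")
}
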
	I1209 05:22:03.909225 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:22:03.919880 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:22:03.919946 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:22:03.953126 1340508 cri.go:89] found id: "7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547"
	I1209 05:22:03.953143 1340508 cri.go:89] found id: ""
	I1209 05:22:03.953151 1340508 logs.go:282] 1 containers: [7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547]
	I1209 05:22:03.953211 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:22:03.957319 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:22:03.957376 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:22:03.989381 1340508 cri.go:89] found id: "40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b"
	I1209 05:22:03.989400 1340508 cri.go:89] found id: ""
	I1209 05:22:03.989408 1340508 logs.go:282] 1 containers: [40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b]
	I1209 05:22:03.989461 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:22:03.993628 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:22:03.993697 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:22:04.024116 1340508 cri.go:89] found id: ""
	I1209 05:22:04.024140 1340508 logs.go:282] 0 containers: []
	W1209 05:22:04.024148 1340508 logs.go:284] No container was found matching "coredns"
	I1209 05:22:04.024154 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:22:04.024214 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:22:04.053055 1340508 cri.go:89] found id: "ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a"
	I1209 05:22:04.053077 1340508 cri.go:89] found id: ""
	I1209 05:22:04.053085 1340508 logs.go:282] 1 containers: [ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a]
	I1209 05:22:04.053143 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:22:04.057553 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:22:04.057625 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:22:04.106335 1340508 cri.go:89] found id: ""
	I1209 05:22:04.106359 1340508 logs.go:282] 0 containers: []
	W1209 05:22:04.106367 1340508 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:22:04.106428 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:22:04.106506 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:22:04.137627 1340508 cri.go:89] found id: "f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da"
	I1209 05:22:04.137648 1340508 cri.go:89] found id: ""
	I1209 05:22:04.137656 1340508 logs.go:282] 1 containers: [f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da]
	I1209 05:22:04.137714 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:22:04.142089 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:22:04.142166 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:22:04.198796 1340508 cri.go:89] found id: ""
	I1209 05:22:04.198819 1340508 logs.go:282] 0 containers: []
	W1209 05:22:04.198827 1340508 logs.go:284] No container was found matching "kindnet"
	I1209 05:22:04.198833 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:22:04.198896 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:22:04.273152 1340508 cri.go:89] found id: ""
	I1209 05:22:04.273173 1340508 logs.go:282] 0 containers: []
	W1209 05:22:04.273181 1340508 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:22:04.273193 1340508 logs.go:123] Gathering logs for kube-scheduler [ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a] ...
	I1209 05:22:04.273205 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a"
	I1209 05:22:04.325344 1340508 logs.go:123] Gathering logs for kube-controller-manager [f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da] ...
	I1209 05:22:04.325419 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da"
	I1209 05:22:04.377900 1340508 logs.go:123] Gathering logs for kubelet ...
	I1209 05:22:04.378053 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:22:04.443332 1340508 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:22:04.443472 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:22:04.561305 1340508 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:22:04.561329 1340508 logs.go:123] Gathering logs for kube-apiserver [7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547] ...
	I1209 05:22:04.561343 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547"
	I1209 05:22:04.616754 1340508 logs.go:123] Gathering logs for containerd ...
	I1209 05:22:04.616832 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:22:04.653195 1340508 logs.go:123] Gathering logs for container status ...
	I1209 05:22:04.653233 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:22:04.687318 1340508 logs.go:123] Gathering logs for dmesg ...
	I1209 05:22:04.687352 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:22:04.705997 1340508 logs.go:123] Gathering logs for etcd [40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b] ...
	I1209 05:22:04.706030 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b"
	I1209 05:22:07.248143 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:22:07.259640 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:22:07.259712 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:22:07.295938 1340508 cri.go:89] found id: "7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547"
	I1209 05:22:07.295954 1340508 cri.go:89] found id: ""
	I1209 05:22:07.295960 1340508 logs.go:282] 1 containers: [7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547]
	I1209 05:22:07.296003 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:22:07.300426 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:22:07.300489 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:22:07.330874 1340508 cri.go:89] found id: "40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b"
	I1209 05:22:07.330893 1340508 cri.go:89] found id: ""
	I1209 05:22:07.330900 1340508 logs.go:282] 1 containers: [40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b]
	I1209 05:22:07.330956 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:22:07.335668 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:22:07.335735 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:22:07.369429 1340508 cri.go:89] found id: ""
	I1209 05:22:07.369451 1340508 logs.go:282] 0 containers: []
	W1209 05:22:07.369459 1340508 logs.go:284] No container was found matching "coredns"
	I1209 05:22:07.369465 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:22:07.369522 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:22:07.398303 1340508 cri.go:89] found id: "ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a"
	I1209 05:22:07.398373 1340508 cri.go:89] found id: ""
	I1209 05:22:07.398395 1340508 logs.go:282] 1 containers: [ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a]
	I1209 05:22:07.398485 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:22:07.403209 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:22:07.403317 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:22:07.433541 1340508 cri.go:89] found id: ""
	I1209 05:22:07.433613 1340508 logs.go:282] 0 containers: []
	W1209 05:22:07.433647 1340508 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:22:07.433669 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:22:07.433765 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:22:07.468644 1340508 cri.go:89] found id: "f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da"
	I1209 05:22:07.468692 1340508 cri.go:89] found id: ""
	I1209 05:22:07.468712 1340508 logs.go:282] 1 containers: [f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da]
	I1209 05:22:07.468790 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:22:07.473045 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:22:07.473175 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:22:07.508678 1340508 cri.go:89] found id: ""
	I1209 05:22:07.508751 1340508 logs.go:282] 0 containers: []
	W1209 05:22:07.508785 1340508 logs.go:284] No container was found matching "kindnet"
	I1209 05:22:07.508810 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:22:07.508904 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:22:07.548486 1340508 cri.go:89] found id: ""
	I1209 05:22:07.548553 1340508 logs.go:282] 0 containers: []
	W1209 05:22:07.548575 1340508 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:22:07.548601 1340508 logs.go:123] Gathering logs for containerd ...
	I1209 05:22:07.548646 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:22:07.581270 1340508 logs.go:123] Gathering logs for kubelet ...
	I1209 05:22:07.581342 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:22:07.662848 1340508 logs.go:123] Gathering logs for dmesg ...
	I1209 05:22:07.663231 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:22:07.681076 1340508 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:22:07.681104 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:22:07.768967 1340508 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:22:07.769037 1340508 logs.go:123] Gathering logs for kube-apiserver [7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547] ...
	I1209 05:22:07.769065 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547"
	I1209 05:22:07.812105 1340508 logs.go:123] Gathering logs for kube-scheduler [ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a] ...
	I1209 05:22:07.812215 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a"
	I1209 05:22:07.870224 1340508 logs.go:123] Gathering logs for kube-controller-manager [f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da] ...
	I1209 05:22:07.870305 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da"
	I1209 05:22:07.921007 1340508 logs.go:123] Gathering logs for container status ...
	I1209 05:22:07.921085 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:22:07.955981 1340508 logs.go:123] Gathering logs for etcd [40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b] ...
	I1209 05:22:07.956068 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b"
	I1209 05:22:10.515073 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:22:10.531389 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:22:10.531460 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:22:10.561098 1340508 cri.go:89] found id: "7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547"
	I1209 05:22:10.561128 1340508 cri.go:89] found id: ""
	I1209 05:22:10.561137 1340508 logs.go:282] 1 containers: [7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547]
	I1209 05:22:10.561199 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:22:10.565159 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:22:10.565232 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:22:10.590381 1340508 cri.go:89] found id: "40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b"
	I1209 05:22:10.590402 1340508 cri.go:89] found id: ""
	I1209 05:22:10.590411 1340508 logs.go:282] 1 containers: [40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b]
	I1209 05:22:10.590470 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:22:10.594262 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:22:10.594335 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:22:10.622509 1340508 cri.go:89] found id: ""
	I1209 05:22:10.622533 1340508 logs.go:282] 0 containers: []
	W1209 05:22:10.622541 1340508 logs.go:284] No container was found matching "coredns"
	I1209 05:22:10.622547 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:22:10.622606 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:22:10.648520 1340508 cri.go:89] found id: "ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a"
	I1209 05:22:10.648542 1340508 cri.go:89] found id: ""
	I1209 05:22:10.648554 1340508 logs.go:282] 1 containers: [ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a]
	I1209 05:22:10.648629 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:22:10.652614 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:22:10.652720 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:22:10.677642 1340508 cri.go:89] found id: ""
	I1209 05:22:10.677666 1340508 logs.go:282] 0 containers: []
	W1209 05:22:10.677674 1340508 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:22:10.677680 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:22:10.677769 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:22:10.702732 1340508 cri.go:89] found id: "f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da"
	I1209 05:22:10.702756 1340508 cri.go:89] found id: ""
	I1209 05:22:10.702765 1340508 logs.go:282] 1 containers: [f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da]
	I1209 05:22:10.702827 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:22:10.706575 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:22:10.706649 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:22:10.735775 1340508 cri.go:89] found id: ""
	I1209 05:22:10.735803 1340508 logs.go:282] 0 containers: []
	W1209 05:22:10.735813 1340508 logs.go:284] No container was found matching "kindnet"
	I1209 05:22:10.735820 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:22:10.735918 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:22:10.762693 1340508 cri.go:89] found id: ""
	I1209 05:22:10.762720 1340508 logs.go:282] 0 containers: []
	W1209 05:22:10.762728 1340508 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:22:10.762744 1340508 logs.go:123] Gathering logs for etcd [40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b] ...
	I1209 05:22:10.762757 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b"
	I1209 05:22:10.795002 1340508 logs.go:123] Gathering logs for container status ...
	I1209 05:22:10.795035 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:22:10.825318 1340508 logs.go:123] Gathering logs for kubelet ...
	I1209 05:22:10.825345 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:22:10.890571 1340508 logs.go:123] Gathering logs for dmesg ...
	I1209 05:22:10.890600 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:22:10.911436 1340508 logs.go:123] Gathering logs for kube-apiserver [7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547] ...
	I1209 05:22:10.911475 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547"
	I1209 05:22:10.987343 1340508 logs.go:123] Gathering logs for kube-scheduler [ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a] ...
	I1209 05:22:10.987378 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a"
	I1209 05:22:11.053783 1340508 logs.go:123] Gathering logs for kube-controller-manager [f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da] ...
	I1209 05:22:11.053857 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da"
	I1209 05:22:11.101012 1340508 logs.go:123] Gathering logs for containerd ...
	I1209 05:22:11.101087 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:22:11.136911 1340508 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:22:11.136993 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:22:11.216583 1340508 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
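
Each retry cycle above has the same shape: check for a running kube-apiserver process, enumerate CRI containers for every control-plane component, then gather logs for whatever was found. The sketch below reproduces that enumeration step locally with os/exec against crictl; the helper name and loop structure are illustrative assumptions, not minikube's actual cri.go API (which runs these commands over SSH inside the node).

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // findContainers mirrors the logged command
    // `sudo crictl ps -a --quiet --name=<name>`: it returns the IDs of
    // matching containers in any state (running or exited).
    func findContainers(name string) []string {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return nil
        }
        return strings.Fields(string(out))
    }

    func main() {
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
        }
        for {
            // Each cycle opens with `sudo pgrep -xnf kube-apiserver.*minikube.*`;
            // a zero exit would mean the apiserver process is up.
            if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
                fmt.Println("kube-apiserver process found")
            }
            for _, name := range components {
                ids := findContainers(name)
                if len(ids) == 0 {
                    // Matches the `No container was found matching ...` warnings above.
                    fmt.Printf("no container was found matching %q\n", name)
                    continue
                }
                fmt.Printf("%s: %d containers: %v\n", name, len(ids), ids)
            }
            time.Sleep(3 * time.Second) // the log shows roughly 3s between cycles
        }
    }
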
	I1209 05:22:13.716914 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:22:13.726803 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:22:13.726875 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:22:13.753793 1340508 cri.go:89] found id: "7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547"
	I1209 05:22:13.753816 1340508 cri.go:89] found id: ""
	I1209 05:22:13.753825 1340508 logs.go:282] 1 containers: [7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547]
	I1209 05:22:13.753879 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:22:13.757430 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:22:13.757498 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:22:13.781699 1340508 cri.go:89] found id: "40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b"
	I1209 05:22:13.781723 1340508 cri.go:89] found id: ""
	I1209 05:22:13.781732 1340508 logs.go:282] 1 containers: [40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b]
	I1209 05:22:13.781794 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:22:13.785351 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:22:13.785424 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:22:13.809307 1340508 cri.go:89] found id: ""
	I1209 05:22:13.809329 1340508 logs.go:282] 0 containers: []
	W1209 05:22:13.809337 1340508 logs.go:284] No container was found matching "coredns"
	I1209 05:22:13.809343 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:22:13.809411 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:22:13.833242 1340508 cri.go:89] found id: "ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a"
	I1209 05:22:13.833261 1340508 cri.go:89] found id: ""
	I1209 05:22:13.833269 1340508 logs.go:282] 1 containers: [ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a]
	I1209 05:22:13.833327 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:22:13.837094 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:22:13.837170 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:22:13.862121 1340508 cri.go:89] found id: ""
	I1209 05:22:13.862190 1340508 logs.go:282] 0 containers: []
	W1209 05:22:13.862214 1340508 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:22:13.862227 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:22:13.862288 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:22:13.886974 1340508 cri.go:89] found id: "f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da"
	I1209 05:22:13.886997 1340508 cri.go:89] found id: ""
	I1209 05:22:13.887006 1340508 logs.go:282] 1 containers: [f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da]
	I1209 05:22:13.887061 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:22:13.890686 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:22:13.890757 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:22:13.914690 1340508 cri.go:89] found id: ""
	I1209 05:22:13.914717 1340508 logs.go:282] 0 containers: []
	W1209 05:22:13.914725 1340508 logs.go:284] No container was found matching "kindnet"
	I1209 05:22:13.914731 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:22:13.914790 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:22:13.946858 1340508 cri.go:89] found id: ""
	I1209 05:22:13.946886 1340508 logs.go:282] 0 containers: []
	W1209 05:22:13.946895 1340508 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:22:13.946908 1340508 logs.go:123] Gathering logs for etcd [40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b] ...
	I1209 05:22:13.946920 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b"
	I1209 05:22:13.988390 1340508 logs.go:123] Gathering logs for container status ...
	I1209 05:22:13.988434 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:22:14.018603 1340508 logs.go:123] Gathering logs for kubelet ...
	I1209 05:22:14.018632 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:22:14.077256 1340508 logs.go:123] Gathering logs for kube-scheduler [ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a] ...
	I1209 05:22:14.077290 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a"
	I1209 05:22:14.110111 1340508 logs.go:123] Gathering logs for kube-controller-manager [f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da] ...
	I1209 05:22:14.110140 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da"
	I1209 05:22:14.142257 1340508 logs.go:123] Gathering logs for containerd ...
	I1209 05:22:14.142285 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:22:14.170851 1340508 logs.go:123] Gathering logs for dmesg ...
	I1209 05:22:14.170886 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:22:14.187392 1340508 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:22:14.187427 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:22:14.252762 1340508 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:22:14.252796 1340508 logs.go:123] Gathering logs for kube-apiserver [7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547] ...
	I1209 05:22:14.252825 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547"
	I1209 05:22:16.787416 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:22:16.797272 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:22:16.797345 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:22:16.822195 1340508 cri.go:89] found id: "7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547"
	I1209 05:22:16.822216 1340508 cri.go:89] found id: ""
	I1209 05:22:16.822224 1340508 logs.go:282] 1 containers: [7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547]
	I1209 05:22:16.822280 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:22:16.825837 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:22:16.825908 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:22:16.849453 1340508 cri.go:89] found id: "40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b"
	I1209 05:22:16.849474 1340508 cri.go:89] found id: ""
	I1209 05:22:16.849483 1340508 logs.go:282] 1 containers: [40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b]
	I1209 05:22:16.849536 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:22:16.853063 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:22:16.853133 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:22:16.878654 1340508 cri.go:89] found id: ""
	I1209 05:22:16.878680 1340508 logs.go:282] 0 containers: []
	W1209 05:22:16.878689 1340508 logs.go:284] No container was found matching "coredns"
	I1209 05:22:16.878695 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:22:16.878754 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:22:16.902325 1340508 cri.go:89] found id: "ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a"
	I1209 05:22:16.902347 1340508 cri.go:89] found id: ""
	I1209 05:22:16.902356 1340508 logs.go:282] 1 containers: [ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a]
	I1209 05:22:16.902410 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:22:16.906071 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:22:16.906139 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:22:16.931296 1340508 cri.go:89] found id: ""
	I1209 05:22:16.931323 1340508 logs.go:282] 0 containers: []
	W1209 05:22:16.931332 1340508 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:22:16.931338 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:22:16.931403 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:22:16.958235 1340508 cri.go:89] found id: "f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da"
	I1209 05:22:16.958264 1340508 cri.go:89] found id: ""
	I1209 05:22:16.958273 1340508 logs.go:282] 1 containers: [f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da]
	I1209 05:22:16.958334 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:22:16.962194 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:22:16.962276 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:22:16.987798 1340508 cri.go:89] found id: ""
	I1209 05:22:16.987841 1340508 logs.go:282] 0 containers: []
	W1209 05:22:16.987850 1340508 logs.go:284] No container was found matching "kindnet"
	I1209 05:22:16.987857 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:22:16.987927 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:22:17.013440 1340508 cri.go:89] found id: ""
	I1209 05:22:17.013464 1340508 logs.go:282] 0 containers: []
	W1209 05:22:17.013472 1340508 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:22:17.013485 1340508 logs.go:123] Gathering logs for containerd ...
	I1209 05:22:17.013498 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:22:17.041580 1340508 logs.go:123] Gathering logs for container status ...
	I1209 05:22:17.041619 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:22:17.070641 1340508 logs.go:123] Gathering logs for kubelet ...
	I1209 05:22:17.070672 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:22:17.127909 1340508 logs.go:123] Gathering logs for dmesg ...
	I1209 05:22:17.127947 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:22:17.143742 1340508 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:22:17.143768 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:22:17.206397 1340508 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:22:17.206459 1340508 logs.go:123] Gathering logs for kube-scheduler [ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a] ...
	I1209 05:22:17.206480 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a"
	I1209 05:22:17.240455 1340508 logs.go:123] Gathering logs for kube-controller-manager [f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da] ...
	I1209 05:22:17.240490 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da"
	I1209 05:22:17.268195 1340508 logs.go:123] Gathering logs for kube-apiserver [7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547] ...
	I1209 05:22:17.268223 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547"
	I1209 05:22:17.302170 1340508 logs.go:123] Gathering logs for etcd [40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b] ...
	I1209 05:22:17.302201 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b"
	I1209 05:22:19.836967 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:22:19.846347 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:22:19.846417 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:22:19.870073 1340508 cri.go:89] found id: "7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547"
	I1209 05:22:19.870095 1340508 cri.go:89] found id: ""
	I1209 05:22:19.870103 1340508 logs.go:282] 1 containers: [7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547]
	I1209 05:22:19.870159 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:22:19.873737 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:22:19.873805 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:22:19.898976 1340508 cri.go:89] found id: "40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b"
	I1209 05:22:19.898998 1340508 cri.go:89] found id: ""
	I1209 05:22:19.899006 1340508 logs.go:282] 1 containers: [40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b]
	I1209 05:22:19.899066 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:22:19.902679 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:22:19.902752 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:22:19.932152 1340508 cri.go:89] found id: ""
	I1209 05:22:19.932182 1340508 logs.go:282] 0 containers: []
	W1209 05:22:19.932190 1340508 logs.go:284] No container was found matching "coredns"
	I1209 05:22:19.932196 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:22:19.932288 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:22:19.965342 1340508 cri.go:89] found id: "ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a"
	I1209 05:22:19.965363 1340508 cri.go:89] found id: ""
	I1209 05:22:19.965372 1340508 logs.go:282] 1 containers: [ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a]
	I1209 05:22:19.965427 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:22:19.970716 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:22:19.970785 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:22:20.006152 1340508 cri.go:89] found id: ""
	I1209 05:22:20.006180 1340508 logs.go:282] 0 containers: []
	W1209 05:22:20.006189 1340508 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:22:20.006196 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:22:20.006271 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:22:20.038948 1340508 cri.go:89] found id: "f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da"
	I1209 05:22:20.039028 1340508 cri.go:89] found id: ""
	I1209 05:22:20.039052 1340508 logs.go:282] 1 containers: [f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da]
	I1209 05:22:20.039143 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:22:20.043067 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:22:20.043140 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:22:20.067586 1340508 cri.go:89] found id: ""
	I1209 05:22:20.067610 1340508 logs.go:282] 0 containers: []
	W1209 05:22:20.067619 1340508 logs.go:284] No container was found matching "kindnet"
	I1209 05:22:20.067625 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:22:20.067689 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:22:20.094988 1340508 cri.go:89] found id: ""
	I1209 05:22:20.095011 1340508 logs.go:282] 0 containers: []
	W1209 05:22:20.095020 1340508 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:22:20.095037 1340508 logs.go:123] Gathering logs for container status ...
	I1209 05:22:20.095050 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:22:20.125291 1340508 logs.go:123] Gathering logs for kubelet ...
	I1209 05:22:20.125319 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:22:20.183234 1340508 logs.go:123] Gathering logs for kube-apiserver [7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547] ...
	I1209 05:22:20.183272 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547"
	I1209 05:22:20.217728 1340508 logs.go:123] Gathering logs for kube-scheduler [ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a] ...
	I1209 05:22:20.217764 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a"
	I1209 05:22:20.249164 1340508 logs.go:123] Gathering logs for dmesg ...
	I1209 05:22:20.249195 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:22:20.265040 1340508 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:22:20.265067 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:22:20.332941 1340508 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:22:20.332960 1340508 logs.go:123] Gathering logs for etcd [40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b] ...
	I1209 05:22:20.332973 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b"
	I1209 05:22:20.365895 1340508 logs.go:123] Gathering logs for kube-controller-manager [f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da] ...
	I1209 05:22:20.365926 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da"
	I1209 05:22:20.400064 1340508 logs.go:123] Gathering logs for containerd ...
	I1209 05:22:20.400091 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:22:22.930188 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:22:22.942699 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:22:22.942777 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:22:22.976624 1340508 cri.go:89] found id: "7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547"
	I1209 05:22:22.976644 1340508 cri.go:89] found id: ""
	I1209 05:22:22.976667 1340508 logs.go:282] 1 containers: [7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547]
	I1209 05:22:22.976732 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:22:22.982359 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:22:22.982438 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:22:23.020491 1340508 cri.go:89] found id: "40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b"
	I1209 05:22:23.020549 1340508 cri.go:89] found id: ""
	I1209 05:22:23.020569 1340508 logs.go:282] 1 containers: [40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b]
	I1209 05:22:23.020640 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:22:23.024256 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:22:23.024331 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:22:23.050199 1340508 cri.go:89] found id: ""
	I1209 05:22:23.050221 1340508 logs.go:282] 0 containers: []
	W1209 05:22:23.050230 1340508 logs.go:284] No container was found matching "coredns"
	I1209 05:22:23.050236 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:22:23.050294 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:22:23.076498 1340508 cri.go:89] found id: "ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a"
	I1209 05:22:23.076522 1340508 cri.go:89] found id: ""
	I1209 05:22:23.076530 1340508 logs.go:282] 1 containers: [ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a]
	I1209 05:22:23.076589 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:22:23.080478 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:22:23.080559 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:22:23.105945 1340508 cri.go:89] found id: ""
	I1209 05:22:23.105972 1340508 logs.go:282] 0 containers: []
	W1209 05:22:23.105987 1340508 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:22:23.105994 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:22:23.106053 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:22:23.133198 1340508 cri.go:89] found id: "f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da"
	I1209 05:22:23.133220 1340508 cri.go:89] found id: ""
	I1209 05:22:23.133228 1340508 logs.go:282] 1 containers: [f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da]
	I1209 05:22:23.133289 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:22:23.137016 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:22:23.137102 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:22:23.162098 1340508 cri.go:89] found id: ""
	I1209 05:22:23.162127 1340508 logs.go:282] 0 containers: []
	W1209 05:22:23.162135 1340508 logs.go:284] No container was found matching "kindnet"
	I1209 05:22:23.162142 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:22:23.162208 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:22:23.186209 1340508 cri.go:89] found id: ""
	I1209 05:22:23.186235 1340508 logs.go:282] 0 containers: []
	W1209 05:22:23.186243 1340508 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:22:23.186257 1340508 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:22:23.186268 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:22:23.251501 1340508 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
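
Every "describe nodes" attempt in these cycles fails identically: kubectl dials localhost:8443 and the connection is refused, which means the apiserver container exists (crictl keeps finding 7af9450f...) but nothing is accepting connections yet. A cheap way to express that distinction before paying for a kubectl round trip is a plain TCP dial; this is a sketch of the idea, not anything minikube itself runs.

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // localhost:8443 is the in-node apiserver endpoint kubectl uses above.
        conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
        if err != nil {
            // Same condition kubectl reports as "connection ... was refused".
            fmt.Println("apiserver not listening:", err)
            return
        }
        conn.Close()
        fmt.Println("port 8443 accepting connections; describe nodes should succeed")
    }
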
	I1209 05:22:23.251524 1340508 logs.go:123] Gathering logs for kube-scheduler [ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a] ...
	I1209 05:22:23.251537 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a"
	I1209 05:22:23.283445 1340508 logs.go:123] Gathering logs for containerd ...
	I1209 05:22:23.283482 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:22:23.311505 1340508 logs.go:123] Gathering logs for container status ...
	I1209 05:22:23.311542 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:22:23.340772 1340508 logs.go:123] Gathering logs for kubelet ...
	I1209 05:22:23.340799 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:22:23.398394 1340508 logs.go:123] Gathering logs for dmesg ...
	I1209 05:22:23.398429 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:22:23.415038 1340508 logs.go:123] Gathering logs for kube-apiserver [7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547] ...
	I1209 05:22:23.415065 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547"
	I1209 05:22:23.448867 1340508 logs.go:123] Gathering logs for etcd [40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b] ...
	I1209 05:22:23.448900 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b"
	I1209 05:22:23.480596 1340508 logs.go:123] Gathering logs for kube-controller-manager [f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da] ...
	I1209 05:22:23.480627 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da"
	I1209 05:22:26.017810 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:22:26.028954 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:22:26.029028 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:22:26.058473 1340508 cri.go:89] found id: "7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547"
	I1209 05:22:26.058496 1340508 cri.go:89] found id: ""
	I1209 05:22:26.058504 1340508 logs.go:282] 1 containers: [7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547]
	I1209 05:22:26.058562 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:22:26.062410 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:22:26.062487 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:22:26.089299 1340508 cri.go:89] found id: "40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b"
	I1209 05:22:26.089322 1340508 cri.go:89] found id: ""
	I1209 05:22:26.089330 1340508 logs.go:282] 1 containers: [40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b]
	I1209 05:22:26.089386 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:22:26.093614 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:22:26.093688 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:22:26.118547 1340508 cri.go:89] found id: ""
	I1209 05:22:26.118572 1340508 logs.go:282] 0 containers: []
	W1209 05:22:26.118580 1340508 logs.go:284] No container was found matching "coredns"
	I1209 05:22:26.118586 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:22:26.118648 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:22:26.146758 1340508 cri.go:89] found id: "ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a"
	I1209 05:22:26.146785 1340508 cri.go:89] found id: ""
	I1209 05:22:26.146793 1340508 logs.go:282] 1 containers: [ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a]
	I1209 05:22:26.146852 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:22:26.150577 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:22:26.150659 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:22:26.175051 1340508 cri.go:89] found id: ""
	I1209 05:22:26.175080 1340508 logs.go:282] 0 containers: []
	W1209 05:22:26.175087 1340508 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:22:26.175094 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:22:26.175156 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:22:26.199798 1340508 cri.go:89] found id: "f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da"
	I1209 05:22:26.199823 1340508 cri.go:89] found id: ""
	I1209 05:22:26.199831 1340508 logs.go:282] 1 containers: [f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da]
	I1209 05:22:26.199886 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:22:26.203361 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:22:26.203434 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:22:26.229964 1340508 cri.go:89] found id: ""
	I1209 05:22:26.229996 1340508 logs.go:282] 0 containers: []
	W1209 05:22:26.230005 1340508 logs.go:284] No container was found matching "kindnet"
	I1209 05:22:26.230012 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:22:26.230097 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:22:26.256707 1340508 cri.go:89] found id: ""
	I1209 05:22:26.256783 1340508 logs.go:282] 0 containers: []
	W1209 05:22:26.256805 1340508 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:22:26.256831 1340508 logs.go:123] Gathering logs for kubelet ...
	I1209 05:22:26.256847 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:22:26.313423 1340508 logs.go:123] Gathering logs for dmesg ...
	I1209 05:22:26.313456 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:22:26.329415 1340508 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:22:26.329445 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:22:26.399912 1340508 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:22:26.399936 1340508 logs.go:123] Gathering logs for etcd [40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b] ...
	I1209 05:22:26.399949 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b"
	I1209 05:22:26.431314 1340508 logs.go:123] Gathering logs for kube-controller-manager [f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da] ...
	I1209 05:22:26.431346 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da"
	I1209 05:22:26.459749 1340508 logs.go:123] Gathering logs for containerd ...
	I1209 05:22:26.459778 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:22:26.488590 1340508 logs.go:123] Gathering logs for kube-apiserver [7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547] ...
	I1209 05:22:26.488621 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547"
	I1209 05:22:26.527795 1340508 logs.go:123] Gathering logs for kube-scheduler [ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a] ...
	I1209 05:22:26.527825 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a"
	I1209 05:22:26.571894 1340508 logs.go:123] Gathering logs for container status ...
	I1209 05:22:26.571926 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:22:29.117849 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:22:29.128246 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:22:29.128316 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:22:29.153667 1340508 cri.go:89] found id: "7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547"
	I1209 05:22:29.153686 1340508 cri.go:89] found id: ""
	I1209 05:22:29.153694 1340508 logs.go:282] 1 containers: [7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547]
	I1209 05:22:29.153750 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:22:29.157398 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:22:29.157469 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:22:29.182068 1340508 cri.go:89] found id: "40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b"
	I1209 05:22:29.182087 1340508 cri.go:89] found id: ""
	I1209 05:22:29.182097 1340508 logs.go:282] 1 containers: [40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b]
	I1209 05:22:29.182153 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:22:29.185729 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:22:29.185807 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:22:29.216205 1340508 cri.go:89] found id: ""
	I1209 05:22:29.216227 1340508 logs.go:282] 0 containers: []
	W1209 05:22:29.216235 1340508 logs.go:284] No container was found matching "coredns"
	I1209 05:22:29.216241 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:22:29.216299 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:22:29.240182 1340508 cri.go:89] found id: "ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a"
	I1209 05:22:29.240200 1340508 cri.go:89] found id: ""
	I1209 05:22:29.240208 1340508 logs.go:282] 1 containers: [ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a]
	I1209 05:22:29.240272 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:22:29.243755 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:22:29.243823 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:22:29.268089 1340508 cri.go:89] found id: ""
	I1209 05:22:29.268118 1340508 logs.go:282] 0 containers: []
	W1209 05:22:29.268127 1340508 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:22:29.268133 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:22:29.268198 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:22:29.293436 1340508 cri.go:89] found id: "f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da"
	I1209 05:22:29.293459 1340508 cri.go:89] found id: ""
	I1209 05:22:29.293468 1340508 logs.go:282] 1 containers: [f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da]
	I1209 05:22:29.293535 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:22:29.297252 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:22:29.297322 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:22:29.321522 1340508 cri.go:89] found id: ""
	I1209 05:22:29.321599 1340508 logs.go:282] 0 containers: []
	W1209 05:22:29.321622 1340508 logs.go:284] No container was found matching "kindnet"
	I1209 05:22:29.321641 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:22:29.321705 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:22:29.345550 1340508 cri.go:89] found id: ""
	I1209 05:22:29.345574 1340508 logs.go:282] 0 containers: []
	W1209 05:22:29.345582 1340508 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:22:29.345598 1340508 logs.go:123] Gathering logs for kube-scheduler [ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a] ...
	I1209 05:22:29.345609 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a"
	I1209 05:22:29.376284 1340508 logs.go:123] Gathering logs for kube-controller-manager [f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da] ...
	I1209 05:22:29.376354 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da"
	I1209 05:22:29.404704 1340508 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:22:29.404730 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:22:29.467803 1340508 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:22:29.467828 1340508 logs.go:123] Gathering logs for kube-apiserver [7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547] ...
	I1209 05:22:29.467841 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547"
	I1209 05:22:29.502692 1340508 logs.go:123] Gathering logs for containerd ...
	I1209 05:22:29.502730 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:22:29.539848 1340508 logs.go:123] Gathering logs for container status ...
	I1209 05:22:29.539885 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:22:29.569923 1340508 logs.go:123] Gathering logs for kubelet ...
	I1209 05:22:29.569948 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:22:29.629631 1340508 logs.go:123] Gathering logs for dmesg ...
	I1209 05:22:29.629674 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:22:29.646137 1340508 logs.go:123] Gathering logs for etcd [40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b] ...
	I1209 05:22:29.646170 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b"
	I1209 05:22:32.179500 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:22:32.189731 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:22:32.189799 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:22:32.223598 1340508 cri.go:89] found id: "7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547"
	I1209 05:22:32.223622 1340508 cri.go:89] found id: ""
	I1209 05:22:32.223630 1340508 logs.go:282] 1 containers: [7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547]
	I1209 05:22:32.223685 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:22:32.228703 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:22:32.228776 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:22:32.263246 1340508 cri.go:89] found id: "40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b"
	I1209 05:22:32.263264 1340508 cri.go:89] found id: ""
	I1209 05:22:32.263273 1340508 logs.go:282] 1 containers: [40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b]
	I1209 05:22:32.263331 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:22:32.267573 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:22:32.267693 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:22:32.305123 1340508 cri.go:89] found id: ""
	I1209 05:22:32.305147 1340508 logs.go:282] 0 containers: []
	W1209 05:22:32.305155 1340508 logs.go:284] No container was found matching "coredns"
	I1209 05:22:32.305182 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:22:32.305246 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:22:32.334651 1340508 cri.go:89] found id: "ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a"
	I1209 05:22:32.334722 1340508 cri.go:89] found id: ""
	I1209 05:22:32.334744 1340508 logs.go:282] 1 containers: [ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a]
	I1209 05:22:32.334830 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:22:32.338498 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:22:32.338570 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:22:32.362412 1340508 cri.go:89] found id: ""
	I1209 05:22:32.362437 1340508 logs.go:282] 0 containers: []
	W1209 05:22:32.362446 1340508 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:22:32.362452 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:22:32.362510 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:22:32.392182 1340508 cri.go:89] found id: "f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da"
	I1209 05:22:32.392245 1340508 cri.go:89] found id: ""
	I1209 05:22:32.392278 1340508 logs.go:282] 1 containers: [f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da]
	I1209 05:22:32.392365 1340508 ssh_runner.go:195] Run: which crictl
	I1209 05:22:32.396441 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:22:32.396568 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:22:32.421435 1340508 cri.go:89] found id: ""
	I1209 05:22:32.421458 1340508 logs.go:282] 0 containers: []
	W1209 05:22:32.421466 1340508 logs.go:284] No container was found matching "kindnet"
	I1209 05:22:32.421473 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:22:32.421532 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:22:32.447070 1340508 cri.go:89] found id: ""
	I1209 05:22:32.447095 1340508 logs.go:282] 0 containers: []
	W1209 05:22:32.447103 1340508 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:22:32.447116 1340508 logs.go:123] Gathering logs for container status ...
	I1209 05:22:32.447127 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:22:32.483181 1340508 logs.go:123] Gathering logs for kube-scheduler [ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a] ...
	I1209 05:22:32.483216 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a"
	I1209 05:22:32.515845 1340508 logs.go:123] Gathering logs for containerd ...
	I1209 05:22:32.515878 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:22:32.547056 1340508 logs.go:123] Gathering logs for kubelet ...
	I1209 05:22:32.547090 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:22:32.609008 1340508 logs.go:123] Gathering logs for dmesg ...
	I1209 05:22:32.609044 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:22:32.625189 1340508 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:22:32.625219 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:22:32.693988 1340508 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:22:32.694018 1340508 logs.go:123] Gathering logs for kube-apiserver [7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547] ...
	I1209 05:22:32.694031 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547"
	I1209 05:22:32.756351 1340508 logs.go:123] Gathering logs for etcd [40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b] ...
	I1209 05:22:32.756385 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b"
	I1209 05:22:32.789464 1340508 logs.go:123] Gathering logs for kube-controller-manager [f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da] ...
	I1209 05:22:32.789494 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da"
	I1209 05:22:35.320942 1340508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:22:35.332237 1340508 kubeadm.go:602] duration metric: took 4m2.735894814s to restartPrimaryControlPlane
	W1209 05:22:35.332305 1340508 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1209 05:22:35.332364 1340508 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1209 05:22:35.843862 1340508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 05:22:35.857198 1340508 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 05:22:35.867153 1340508 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1209 05:22:35.867219 1340508 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 05:22:35.875644 1340508 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 05:22:35.875664 1340508 kubeadm.go:158] found existing configuration files:
	
	I1209 05:22:35.875719 1340508 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1209 05:22:35.884140 1340508 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 05:22:35.884205 1340508 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 05:22:35.892708 1340508 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1209 05:22:35.900259 1340508 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 05:22:35.900326 1340508 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 05:22:35.908041 1340508 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1209 05:22:35.915526 1340508 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 05:22:35.915601 1340508 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 05:22:35.923067 1340508 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1209 05:22:35.931097 1340508 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 05:22:35.931170 1340508 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
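	The stale-config pass above applies one rule per file: if a kubeconfig under /etc/kubernetes does not contain the expected control-plane endpoint (here because none of the files exist after the reset), it is removed so that `kubeadm init` can regenerate it. A shell sketch of the equivalent loop, using the same endpoint as in the log:
	
	    # Drop any kubeconfig that does not point at the expected endpoint.
	    for f in admin kubelet controller-manager scheduler; do
	      sudo grep -q "https://control-plane.minikube.internal:8443" \
	        /etc/kubernetes/$f.conf || sudo rm -f /etc/kubernetes/$f.conf
	    done
	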
	I1209 05:22:35.938802 1340508 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1209 05:22:35.985567 1340508 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1209 05:22:35.985632 1340508 kubeadm.go:319] [preflight] Running pre-flight checks
	I1209 05:22:36.063049 1340508 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1209 05:22:36.063126 1340508 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1209 05:22:36.063168 1340508 kubeadm.go:319] OS: Linux
	I1209 05:22:36.063217 1340508 kubeadm.go:319] CGROUPS_CPU: enabled
	I1209 05:22:36.063269 1340508 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1209 05:22:36.063320 1340508 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1209 05:22:36.063373 1340508 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1209 05:22:36.063424 1340508 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1209 05:22:36.063484 1340508 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1209 05:22:36.063534 1340508 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1209 05:22:36.063587 1340508 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1209 05:22:36.063640 1340508 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1209 05:22:36.136558 1340508 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1209 05:22:36.136674 1340508 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1209 05:22:36.136770 1340508 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1209 05:22:46.337931 1340508 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1209 05:22:46.340846 1340508 out.go:252]   - Generating certificates and keys ...
	I1209 05:22:46.340950 1340508 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1209 05:22:46.341053 1340508 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1209 05:22:46.341149 1340508 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1209 05:22:46.341422 1340508 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1209 05:22:46.342162 1340508 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1209 05:22:46.343616 1340508 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1209 05:22:46.344451 1340508 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1209 05:22:46.345293 1340508 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1209 05:22:46.345704 1340508 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1209 05:22:46.346588 1340508 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1209 05:22:46.347375 1340508 kubeadm.go:319] [certs] Using the existing "sa" key
	I1209 05:22:46.347442 1340508 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1209 05:22:46.586884 1340508 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1209 05:22:47.064582 1340508 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1209 05:22:47.285421 1340508 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1209 05:22:47.379708 1340508 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1209 05:22:47.525462 1340508 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1209 05:22:47.526186 1340508 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1209 05:22:47.528887 1340508 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1209 05:22:47.532784 1340508 out.go:252]   - Booting up control plane ...
	I1209 05:22:47.532889 1340508 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1209 05:22:47.532967 1340508 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1209 05:22:47.533034 1340508 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1209 05:22:47.554582 1340508 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1209 05:22:47.554699 1340508 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1209 05:22:47.562753 1340508 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1209 05:22:47.564780 1340508 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1209 05:22:47.564855 1340508 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1209 05:22:47.697738 1340508 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1209 05:22:47.697863 1340508 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1209 05:26:47.697826 1340508 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000146222s
	I1209 05:26:47.698120 1340508 kubeadm.go:319] 
	I1209 05:26:47.698198 1340508 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1209 05:26:47.698232 1340508 kubeadm.go:319] 	- The kubelet is not running
	I1209 05:26:47.698337 1340508 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1209 05:26:47.698343 1340508 kubeadm.go:319] 
	I1209 05:26:47.698447 1340508 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1209 05:26:47.698479 1340508 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1209 05:26:47.698510 1340508 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1209 05:26:47.698514 1340508 kubeadm.go:319] 
	I1209 05:26:47.703648 1340508 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1209 05:26:47.704090 1340508 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1209 05:26:47.704199 1340508 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1209 05:26:47.704461 1340508 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1209 05:26:47.704467 1340508 kubeadm.go:319] 
	I1209 05:26:47.704535 1340508 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
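	The failure above is a health probe timing out: for up to 4m0s, kubeadm polls the kubelet's local healthz endpoint and never gets an answer, so the static control-plane pods are never confirmed. The same probe, plus the two checks kubeadm suggests, can be run by hand on the node:
	
	    curl -sSL http://127.0.0.1:10248/healthz   # a healthy kubelet replies "ok"
	    systemctl status kubelet                   # is the unit active at all?
	    journalctl -xeu kubelet                    # why it exited, if it did
	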
	W1209 05:26:47.704638 1340508 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000146222s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
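	Of the three warnings in stderr, the cgroups one is the most likely lead on this host: the warning fires because this 5.15 AWS kernel runs cgroup v1, which kubelet v1.35+ treats as opt-in. A hedged sketch of the opt-in named in the warning (field spelling taken from the warning text; verify against the KubeletConfiguration reference for this version, and note the warning says the validation must also be skipped explicitly):
	
	    stat -fc %T /sys/fs/cgroup        # "cgroup2fs" means v2; "tmpfs" means v1
	    # Append the opt-in to the kubelet config the init phase wrote above:
	    cat <<'EOF' | sudo tee -a /var/lib/kubelet/config.yaml
	    failCgroupV1: false
	    EOF
	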
	
	I1209 05:26:47.704709 1340508 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1209 05:26:48.138351 1340508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 05:26:48.151587 1340508 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1209 05:26:48.151650 1340508 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 05:26:48.159467 1340508 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 05:26:48.159488 1340508 kubeadm.go:158] found existing configuration files:
	
	I1209 05:26:48.159539 1340508 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1209 05:26:48.167322 1340508 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 05:26:48.167386 1340508 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 05:26:48.174486 1340508 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1209 05:26:48.182006 1340508 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 05:26:48.182072 1340508 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 05:26:48.189247 1340508 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1209 05:26:48.196719 1340508 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 05:26:48.196782 1340508 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 05:26:48.204095 1340508 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1209 05:26:48.211398 1340508 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 05:26:48.211461 1340508 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1209 05:26:48.218785 1340508 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1209 05:26:48.257959 1340508 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1209 05:26:48.258023 1340508 kubeadm.go:319] [preflight] Running pre-flight checks
	I1209 05:26:48.324700 1340508 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1209 05:26:48.324810 1340508 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1209 05:26:48.324851 1340508 kubeadm.go:319] OS: Linux
	I1209 05:26:48.324912 1340508 kubeadm.go:319] CGROUPS_CPU: enabled
	I1209 05:26:48.324978 1340508 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1209 05:26:48.325038 1340508 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1209 05:26:48.325099 1340508 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1209 05:26:48.325170 1340508 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1209 05:26:48.325232 1340508 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1209 05:26:48.325290 1340508 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1209 05:26:48.325350 1340508 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1209 05:26:48.325414 1340508 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1209 05:26:48.398536 1340508 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1209 05:26:48.398655 1340508 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1209 05:26:48.398746 1340508 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1209 05:26:48.411018 1340508 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1209 05:26:48.414743 1340508 out.go:252]   - Generating certificates and keys ...
	I1209 05:26:48.414831 1340508 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1209 05:26:48.414895 1340508 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1209 05:26:48.414967 1340508 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1209 05:26:48.415025 1340508 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1209 05:26:48.415092 1340508 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1209 05:26:48.415143 1340508 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1209 05:26:48.415202 1340508 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1209 05:26:48.415260 1340508 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1209 05:26:48.415329 1340508 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1209 05:26:48.415397 1340508 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1209 05:26:48.415433 1340508 kubeadm.go:319] [certs] Using the existing "sa" key
	I1209 05:26:48.415485 1340508 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1209 05:26:48.881686 1340508 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1209 05:26:49.203175 1340508 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1209 05:26:49.611611 1340508 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1209 05:26:49.709338 1340508 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1209 05:26:49.879946 1340508 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1209 05:26:49.880073 1340508 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1209 05:26:49.880142 1340508 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1209 05:26:49.883455 1340508 out.go:252]   - Booting up control plane ...
	I1209 05:26:49.883566 1340508 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1209 05:26:49.883648 1340508 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1209 05:26:49.885428 1340508 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1209 05:26:49.906885 1340508 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1209 05:26:49.906990 1340508 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1209 05:26:49.916567 1340508 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1209 05:26:49.916881 1340508 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1209 05:26:49.917100 1340508 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1209 05:26:50.052591 1340508 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1209 05:26:50.052706 1340508 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1209 05:30:50.050398 1340508 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000409812s
	I1209 05:30:50.050683 1340508 kubeadm.go:319] 
	I1209 05:30:50.050753 1340508 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1209 05:30:50.050787 1340508 kubeadm.go:319] 	- The kubelet is not running
	I1209 05:30:50.050892 1340508 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1209 05:30:50.050898 1340508 kubeadm.go:319] 
	I1209 05:30:50.051002 1340508 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1209 05:30:50.051034 1340508 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1209 05:30:50.051065 1340508 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1209 05:30:50.051069 1340508 kubeadm.go:319] 
	I1209 05:30:50.054925 1340508 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1209 05:30:50.055367 1340508 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1209 05:30:50.055479 1340508 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1209 05:30:50.055716 1340508 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1209 05:30:50.055723 1340508 kubeadm.go:319] 
	I1209 05:30:50.055792 1340508 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1209 05:30:50.055845 1340508 kubeadm.go:403] duration metric: took 12m17.527665319s to StartCluster
	I1209 05:30:50.055878 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:30:50.055939 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:30:50.090018 1340508 cri.go:89] found id: ""
	I1209 05:30:50.090041 1340508 logs.go:282] 0 containers: []
	W1209 05:30:50.090049 1340508 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:30:50.090055 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:30:50.090116 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:30:50.122670 1340508 cri.go:89] found id: ""
	I1209 05:30:50.122691 1340508 logs.go:282] 0 containers: []
	W1209 05:30:50.122700 1340508 logs.go:284] No container was found matching "etcd"
	I1209 05:30:50.122706 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:30:50.122771 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:30:50.158597 1340508 cri.go:89] found id: ""
	I1209 05:30:50.158617 1340508 logs.go:282] 0 containers: []
	W1209 05:30:50.158626 1340508 logs.go:284] No container was found matching "coredns"
	I1209 05:30:50.158632 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:30:50.158688 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:30:50.194645 1340508 cri.go:89] found id: ""
	I1209 05:30:50.194671 1340508 logs.go:282] 0 containers: []
	W1209 05:30:50.194679 1340508 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:30:50.194689 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:30:50.194764 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:30:50.234640 1340508 cri.go:89] found id: ""
	I1209 05:30:50.234661 1340508 logs.go:282] 0 containers: []
	W1209 05:30:50.234670 1340508 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:30:50.234676 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:30:50.234735 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:30:50.262325 1340508 cri.go:89] found id: ""
	I1209 05:30:50.262406 1340508 logs.go:282] 0 containers: []
	W1209 05:30:50.262445 1340508 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:30:50.262455 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:30:50.262543 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:30:50.294557 1340508 cri.go:89] found id: ""
	I1209 05:30:50.294584 1340508 logs.go:282] 0 containers: []
	W1209 05:30:50.294591 1340508 logs.go:284] No container was found matching "kindnet"
	I1209 05:30:50.294598 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:30:50.294703 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:30:50.327015 1340508 cri.go:89] found id: ""
	I1209 05:30:50.327087 1340508 logs.go:282] 0 containers: []
	W1209 05:30:50.327098 1340508 logs.go:284] No container was found matching "storage-provisioner"
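	After the second timeout, the container sweep above confirms the cluster never came up: not one control-plane or addon container exists, which is consistent with a kubelet that never became healthy (the kubelet is what launches the static pods). The sweep condenses to a single loop:
	
	    # Count containers per component, as the sweep above does one by one.
	    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	             kube-controller-manager kindnet storage-provisioner; do
	      printf '%-24s %s\n' "$c" "$(sudo crictl ps -a --quiet --name="$c" | wc -l)"
	    done
	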
	I1209 05:30:50.327183 1340508 logs.go:123] Gathering logs for kubelet ...
	I1209 05:30:50.327225 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:30:50.396351 1340508 logs.go:123] Gathering logs for dmesg ...
	I1209 05:30:50.396435 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:30:50.413877 1340508 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:30:50.413902 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:30:50.557364 1340508 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:30:50.557430 1340508 logs.go:123] Gathering logs for containerd ...
	I1209 05:30:50.557455 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:30:50.610343 1340508 logs.go:123] Gathering logs for container status ...
	I1209 05:30:50.610422 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1209 05:30:50.651163 1340508 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000409812s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1209 05:30:50.651205 1340508 out.go:285] * 
	W1209 05:30:50.651255 1340508 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000409812s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1209 05:30:50.651265 1340508 out.go:285] * 
	W1209 05:30:50.653400 1340508 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 05:30:50.659043 1340508 out.go:203] 
	W1209 05:30:50.662759 1340508 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000409812s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1209 05:30:50.662799 1340508 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1209 05:30:50.662819 1340508 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1209 05:30:50.666095 1340508 out.go:203] 
** /stderr **
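The K8S_KUBELET_NOT_RUNNING failure above ships with its own remediation hint. As a minimal sketch (not part of the captured run, reusing the profile name and flags recorded in the Audit table below), the retry and follow-up inspection would look like:

	# Retry with the cgroup driver the log suggests; all other flags match this profile.
	out/minikube-linux-arm64 start -p kubernetes-upgrade-511751 --memory=3072 \
	  --kubernetes-version=v1.35.0-beta.0 --driver=docker --container-runtime=containerd \
	  --extra-config=kubelet.cgroup-driver=systemd

	# If the kubelet still misses the 4m0s health check, inspect it inside the node:
	out/minikube-linux-arm64 ssh -p kubernetes-upgrade-511751 "sudo systemctl status kubelet"
	out/minikube-linux-arm64 ssh -p kubernetes-upgrade-511751 "sudo journalctl -u kubelet --no-pager | tail -n 50"
	out/minikube-linux-arm64 ssh -p kubernetes-upgrade-511751 "curl -sS http://127.0.0.1:10248/healthz"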
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-linux-arm64 start -p kubernetes-upgrade-511751 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd : exit status 109
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-511751 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-511751 version --output=json: exit status 1 (147.143168ms)
-- stdout --
	{
	  "clientVersion": {
	    "major": "1",
	    "minor": "33",
	    "gitVersion": "v1.33.2",
	    "gitCommit": "a57b6f7709f6c2722b92f07b8b4c48210a51fc40",
	    "gitTreeState": "clean",
	    "buildDate": "2025-06-17T18:41:31Z",
	    "goVersion": "go1.24.4",
	    "compiler": "gc",
	    "platform": "linux/arm64"
	  },
	  "kustomizeVersion": "v5.6.0"
	}
-- /stdout --
** stderr ** 
	The connection to the server 192.168.76.2:8443 was refused - did you specify the right host or port?
** /stderr **
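"Connection refused" here means nothing was listening on the apiserver endpoint when the probe ran, which is consistent with the kubeadm wait-control-plane failure above. A quick diagnostic sketch (assuming crictl is present in the kic node, as it normally is):

	# The client half works (it printed clientVersion above); only the server probe fails.
	kubectl --context kubernetes-upgrade-511751 get nodes
	# Check whether an apiserver container was ever created inside the node.
	out/minikube-linux-arm64 ssh -p kubernetes-upgrade-511751 "sudo crictl ps -a | grep kube-apiserver"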
version_upgrade_test.go:250: error running kubectl: exit status 1
panic.go:615: *** TestKubernetesUpgrade FAILED at 2025-12-09 05:30:51.992923638 +0000 UTC m=+5013.087461215
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestKubernetesUpgrade]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestKubernetesUpgrade]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect kubernetes-upgrade-511751
helpers_test.go:243: (dbg) docker inspect kubernetes-upgrade-511751:
-- stdout --
	[
	    {
	        "Id": "43832c3e43fa1e11cb2da26a9e669a1df1c2f5b5f103bf3a00dcfa97e8db6868",
	        "Created": "2025-12-09T05:17:46.783579106Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1340640,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-09T05:18:19.66961568Z",
	            "FinishedAt": "2025-12-09T05:18:18.694304185Z"
	        },
	        "Image": "sha256:e4eb91ed18a24161fce60c7cdd660144ecd5b8c5029dc2dea2c5e423c2f48ce4",
	        "ResolvConfPath": "/var/lib/docker/containers/43832c3e43fa1e11cb2da26a9e669a1df1c2f5b5f103bf3a00dcfa97e8db6868/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/43832c3e43fa1e11cb2da26a9e669a1df1c2f5b5f103bf3a00dcfa97e8db6868/hostname",
	        "HostsPath": "/var/lib/docker/containers/43832c3e43fa1e11cb2da26a9e669a1df1c2f5b5f103bf3a00dcfa97e8db6868/hosts",
	        "LogPath": "/var/lib/docker/containers/43832c3e43fa1e11cb2da26a9e669a1df1c2f5b5f103bf3a00dcfa97e8db6868/43832c3e43fa1e11cb2da26a9e669a1df1c2f5b5f103bf3a00dcfa97e8db6868-json.log",
	        "Name": "/kubernetes-upgrade-511751",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "kubernetes-upgrade-511751:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "kubernetes-upgrade-511751",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "43832c3e43fa1e11cb2da26a9e669a1df1c2f5b5f103bf3a00dcfa97e8db6868",
	                "LowerDir": "/var/lib/docker/overlay2/c7eaae1cc48aaf95e6ed688412dd78c1481b073acf604624f470e693acb8be87-init/diff:/var/lib/docker/overlay2/c44bb57aa59cc265266f37f2bb6e7ec0e7d641c3b4aeaa57e6d23deec6f0d1d4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c7eaae1cc48aaf95e6ed688412dd78c1481b073acf604624f470e693acb8be87/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c7eaae1cc48aaf95e6ed688412dd78c1481b073acf604624f470e693acb8be87/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c7eaae1cc48aaf95e6ed688412dd78c1481b073acf604624f470e693acb8be87/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "kubernetes-upgrade-511751",
	                "Source": "/var/lib/docker/volumes/kubernetes-upgrade-511751/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "kubernetes-upgrade-511751",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "kubernetes-upgrade-511751",
	                "name.minikube.sigs.k8s.io": "kubernetes-upgrade-511751",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8d0b30cf02b704d91228b02989ec1d4643f10fb0495d429462bfebe454d031f1",
	            "SandboxKey": "/var/run/docker/netns/8d0b30cf02b7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34125"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34126"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34129"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34127"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34128"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "kubernetes-upgrade-511751": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "36:37:1c:d3:87:05",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "3a054111a305247e4c605a0baaca01599e49da693311b50ce62f09aec0000542",
	                    "EndpointID": "e4145d9754e8af0221be47baf025c42aca292de4308c06e81b75ca8db588141c",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "kubernetes-upgrade-511751",
	                        "43832c3e43fa"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
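The inspect output above is where the host-side apiserver endpoint comes from: container port 8443/tcp is published on 127.0.0.1:34128. The same Go-template pattern minikube itself uses for 22/tcp (visible in the Last Start log below) extracts it in one step, for example:

	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' kubernetes-upgrade-511751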
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p kubernetes-upgrade-511751 -n kubernetes-upgrade-511751
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p kubernetes-upgrade-511751 -n kubernetes-upgrade-511751: exit status 2 (419.840603ms)
-- stdout --
	Running
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
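The harness renders only the {{.Host}} field, which is why a container that is up but has a dead control plane still prints "Running". A fuller picture (sketch; field names follow minikube's status template) comes from:

	out/minikube-linux-arm64 status -p kubernetes-upgrade-511751
	out/minikube-linux-arm64 status -p kubernetes-upgrade-511751 --format='host:{{.Host}} kubelet:{{.Kubelet}} apiserver:{{.APIServer}}'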
helpers_test.go:252: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-511751 logs -n 25
helpers_test.go:260: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                         ARGS                                                                          │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p NoKubernetes-284947 sudo systemctl is-active --quiet service kubelet                                                                               │ NoKubernetes-284947       │ jenkins │ v1.37.0 │ 09 Dec 25 05:17 UTC │                     │
	│ stop    │ -p NoKubernetes-284947                                                                                                                                │ NoKubernetes-284947       │ jenkins │ v1.37.0 │ 09 Dec 25 05:17 UTC │ 09 Dec 25 05:17 UTC │
	│ start   │ -p NoKubernetes-284947 --driver=docker  --container-runtime=containerd                                                                                │ NoKubernetes-284947       │ jenkins │ v1.37.0 │ 09 Dec 25 05:17 UTC │ 09 Dec 25 05:17 UTC │
	│ start   │ -p missing-upgrade-253761 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd                                        │ missing-upgrade-253761    │ jenkins │ v1.37.0 │ 09 Dec 25 05:17 UTC │ 09 Dec 25 05:19 UTC │
	│ ssh     │ -p NoKubernetes-284947 sudo systemctl is-active --quiet service kubelet                                                                               │ NoKubernetes-284947       │ jenkins │ v1.37.0 │ 09 Dec 25 05:17 UTC │                     │
	│ delete  │ -p NoKubernetes-284947                                                                                                                                │ NoKubernetes-284947       │ jenkins │ v1.37.0 │ 09 Dec 25 05:17 UTC │ 09 Dec 25 05:17 UTC │
	│ start   │ -p kubernetes-upgrade-511751 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd        │ kubernetes-upgrade-511751 │ jenkins │ v1.37.0 │ 09 Dec 25 05:17 UTC │ 09 Dec 25 05:18 UTC │
	│ stop    │ -p kubernetes-upgrade-511751                                                                                                                          │ kubernetes-upgrade-511751 │ jenkins │ v1.37.0 │ 09 Dec 25 05:18 UTC │ 09 Dec 25 05:18 UTC │
	│ start   │ -p kubernetes-upgrade-511751 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd │ kubernetes-upgrade-511751 │ jenkins │ v1.37.0 │ 09 Dec 25 05:18 UTC │                     │
	│ delete  │ -p missing-upgrade-253761                                                                                                                             │ missing-upgrade-253761    │ jenkins │ v1.37.0 │ 09 Dec 25 05:19 UTC │ 09 Dec 25 05:19 UTC │
	│ start   │ -p stopped-upgrade-774042 --memory=3072 --vm-driver=docker  --container-runtime=containerd                                                            │ stopped-upgrade-774042    │ jenkins │ v1.35.0 │ 09 Dec 25 05:19 UTC │ 09 Dec 25 05:19 UTC │
	│ stop    │ stopped-upgrade-774042 stop                                                                                                                           │ stopped-upgrade-774042    │ jenkins │ v1.35.0 │ 09 Dec 25 05:19 UTC │ 09 Dec 25 05:19 UTC │
	│ start   │ -p stopped-upgrade-774042 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd                                        │ stopped-upgrade-774042    │ jenkins │ v1.37.0 │ 09 Dec 25 05:19 UTC │ 09 Dec 25 05:24 UTC │
	│ delete  │ -p stopped-upgrade-774042                                                                                                                             │ stopped-upgrade-774042    │ jenkins │ v1.37.0 │ 09 Dec 25 05:24 UTC │ 09 Dec 25 05:24 UTC │
	│ start   │ -p running-upgrade-571370 --memory=3072 --vm-driver=docker  --container-runtime=containerd                                                            │ running-upgrade-571370    │ jenkins │ v1.35.0 │ 09 Dec 25 05:24 UTC │ 09 Dec 25 05:24 UTC │
	│ start   │ -p running-upgrade-571370 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd                                        │ running-upgrade-571370    │ jenkins │ v1.37.0 │ 09 Dec 25 05:24 UTC │ 09 Dec 25 05:29 UTC │
	│ delete  │ -p running-upgrade-571370                                                                                                                             │ running-upgrade-571370    │ jenkins │ v1.37.0 │ 09 Dec 25 05:29 UTC │ 09 Dec 25 05:29 UTC │
	│ start   │ -p pause-523987 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd                                       │ pause-523987              │ jenkins │ v1.37.0 │ 09 Dec 25 05:29 UTC │ 09 Dec 25 05:30 UTC │
	│ start   │ -p pause-523987 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd                                                                │ pause-523987              │ jenkins │ v1.37.0 │ 09 Dec 25 05:30 UTC │ 09 Dec 25 05:30 UTC │
	│ pause   │ -p pause-523987 --alsologtostderr -v=5                                                                                                                │ pause-523987              │ jenkins │ v1.37.0 │ 09 Dec 25 05:30 UTC │ 09 Dec 25 05:30 UTC │
	│ unpause │ -p pause-523987 --alsologtostderr -v=5                                                                                                                │ pause-523987              │ jenkins │ v1.37.0 │ 09 Dec 25 05:30 UTC │ 09 Dec 25 05:30 UTC │
	│ pause   │ -p pause-523987 --alsologtostderr -v=5                                                                                                                │ pause-523987              │ jenkins │ v1.37.0 │ 09 Dec 25 05:30 UTC │ 09 Dec 25 05:30 UTC │
	│ delete  │ -p pause-523987 --alsologtostderr -v=5                                                                                                                │ pause-523987              │ jenkins │ v1.37.0 │ 09 Dec 25 05:30 UTC │ 09 Dec 25 05:30 UTC │
	│ delete  │ -p pause-523987                                                                                                                                       │ pause-523987              │ jenkins │ v1.37.0 │ 09 Dec 25 05:30 UTC │ 09 Dec 25 05:30 UTC │
	│ start   │ -p force-systemd-flag-288240 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd                     │ force-systemd-flag-288240 │ jenkins │ v1.37.0 │ 09 Dec 25 05:30 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/09 05:30:37
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 05:30:37.730412 1381231 out.go:360] Setting OutFile to fd 1 ...
	I1209 05:30:37.730539 1381231 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 05:30:37.730548 1381231 out.go:374] Setting ErrFile to fd 2...
	I1209 05:30:37.730553 1381231 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 05:30:37.730794 1381231 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-1142328/.minikube/bin
	I1209 05:30:37.731192 1381231 out.go:368] Setting JSON to false
	I1209 05:30:37.732093 1381231 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":29561,"bootTime":1765228677,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1209 05:30:37.732165 1381231 start.go:143] virtualization:  
	I1209 05:30:37.736035 1381231 out.go:179] * [force-systemd-flag-288240] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1209 05:30:37.739332 1381231 notify.go:221] Checking for updates...
	I1209 05:30:37.740059 1381231 out.go:179]   - MINIKUBE_LOCATION=22081
	I1209 05:30:37.743325 1381231 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 05:30:37.746647 1381231 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22081-1142328/kubeconfig
	I1209 05:30:37.749813 1381231 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-1142328/.minikube
	I1209 05:30:37.752893 1381231 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1209 05:30:37.755732 1381231 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 05:30:37.759177 1381231 config.go:182] Loaded profile config "kubernetes-upgrade-511751": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1209 05:30:37.759283 1381231 driver.go:422] Setting default libvirt URI to qemu:///system
	I1209 05:30:37.787833 1381231 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1209 05:30:37.787947 1381231 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 05:30:37.852667 1381231 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-09 05:30:37.843783817 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1209 05:30:37.852776 1381231 docker.go:319] overlay module found
	I1209 05:30:37.855875 1381231 out.go:179] * Using the docker driver based on user configuration
	I1209 05:30:37.858900 1381231 start.go:309] selected driver: docker
	I1209 05:30:37.858917 1381231 start.go:927] validating driver "docker" against <nil>
	I1209 05:30:37.858930 1381231 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 05:30:37.859675 1381231 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 05:30:37.917988 1381231 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-09 05:30:37.908420106 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1209 05:30:37.918161 1381231 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1209 05:30:37.918383 1381231 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1209 05:30:37.921374 1381231 out.go:179] * Using Docker driver with root privileges
	I1209 05:30:37.924165 1381231 cni.go:84] Creating CNI manager for ""
	I1209 05:30:37.924237 1381231 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1209 05:30:37.924250 1381231 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1209 05:30:37.924327 1381231 start.go:353] cluster config:
	{Name:force-systemd-flag-288240 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:force-systemd-flag-288240 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 05:30:37.927459 1381231 out.go:179] * Starting "force-systemd-flag-288240" primary control-plane node in "force-systemd-flag-288240" cluster
	I1209 05:30:37.930267 1381231 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1209 05:30:37.933185 1381231 out.go:179] * Pulling base image v0.0.48-1765184860-22066 ...
	I1209 05:30:37.936135 1381231 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1209 05:30:37.936180 1381231 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4
	I1209 05:30:37.936189 1381231 cache.go:65] Caching tarball of preloaded images
	I1209 05:30:37.936353 1381231 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local docker daemon
	I1209 05:30:37.936552 1381231 preload.go:238] Found /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1209 05:30:37.936564 1381231 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on containerd
	I1209 05:30:37.936672 1381231 profile.go:143] Saving config to /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/force-systemd-flag-288240/config.json ...
	I1209 05:30:37.936690 1381231 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/force-systemd-flag-288240/config.json: {Name:mk4afa663b4c600f84135b5f28df644c64820ece Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 05:30:37.960732 1381231 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local docker daemon, skipping pull
	I1209 05:30:37.960760 1381231 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c exists in daemon, skipping load
	I1209 05:30:37.960774 1381231 cache.go:243] Successfully downloaded all kic artifacts
	I1209 05:30:37.960802 1381231 start.go:360] acquireMachinesLock for force-systemd-flag-288240: {Name:mkef3180eda93ec0163ce88f1b03e3535e91c367 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 05:30:37.960909 1381231 start.go:364] duration metric: took 86.792µs to acquireMachinesLock for "force-systemd-flag-288240"
	I1209 05:30:37.960941 1381231 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-288240 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:force-systemd-flag-288240 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1209 05:30:37.961011 1381231 start.go:125] createHost starting for "" (driver="docker")
	I1209 05:30:37.964341 1381231 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1209 05:30:37.964588 1381231 start.go:159] libmachine.API.Create for "force-systemd-flag-288240" (driver="docker")
	I1209 05:30:37.964621 1381231 client.go:173] LocalClient.Create starting
	I1209 05:30:37.964679 1381231 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem
	I1209 05:30:37.964715 1381231 main.go:143] libmachine: Decoding PEM data...
	I1209 05:30:37.964738 1381231 main.go:143] libmachine: Parsing certificate...
	I1209 05:30:37.964799 1381231 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/cert.pem
	I1209 05:30:37.964821 1381231 main.go:143] libmachine: Decoding PEM data...
	I1209 05:30:37.964836 1381231 main.go:143] libmachine: Parsing certificate...
	I1209 05:30:37.965193 1381231 cli_runner.go:164] Run: docker network inspect force-systemd-flag-288240 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1209 05:30:37.984130 1381231 cli_runner.go:211] docker network inspect force-systemd-flag-288240 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1209 05:30:37.984227 1381231 network_create.go:284] running [docker network inspect force-systemd-flag-288240] to gather additional debugging logs...
	I1209 05:30:37.984250 1381231 cli_runner.go:164] Run: docker network inspect force-systemd-flag-288240
	W1209 05:30:38.000926 1381231 cli_runner.go:211] docker network inspect force-systemd-flag-288240 returned with exit code 1
	I1209 05:30:38.000958 1381231 network_create.go:287] error running [docker network inspect force-systemd-flag-288240]: docker network inspect force-systemd-flag-288240: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-flag-288240 not found
	I1209 05:30:38.000973 1381231 network_create.go:289] output of [docker network inspect force-systemd-flag-288240]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-flag-288240 not found
	
	** /stderr **
	I1209 05:30:38.001068 1381231 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1209 05:30:38.020208 1381231 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-7a15eec16b1a IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:8a:b7:58:bc:12:6c} reservation:<nil>}
	I1209 05:30:38.020617 1381231 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-fcb9e6b38e8e IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:56:c3:7a:b4:06:4b} reservation:<nil>}
	I1209 05:30:38.020874 1381231 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-8c1346c67d6b IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:82:10:14:75:55:fb} reservation:<nil>}
	I1209 05:30:38.021164 1381231 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-3a054111a305 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:56:3c:4d:2e:1d:cb} reservation:<nil>}
	I1209 05:30:38.021652 1381231 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400196fd50}
	I1209 05:30:38.021677 1381231 network_create.go:124] attempt to create docker network force-systemd-flag-288240 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1209 05:30:38.021742 1381231 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-288240 force-systemd-flag-288240
	I1209 05:30:38.085134 1381231 network_create.go:108] docker network force-systemd-flag-288240 192.168.85.0/24 created
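	# Sketch, not part of the captured log: the subnet walk above can be verified by
	# listing each Docker network's IPAM subnet; minikube skips every taken /24
	# before settling on 192.168.85.0/24.
	docker network inspect -f '{{.Name}} {{range .IPAM.Config}}{{.Subnet}}{{end}}' $(docker network ls -q)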
	I1209 05:30:38.085165 1381231 kic.go:121] calculated static IP "192.168.85.2" for the "force-systemd-flag-288240" container
	I1209 05:30:38.085240 1381231 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1209 05:30:38.102146 1381231 cli_runner.go:164] Run: docker volume create force-systemd-flag-288240 --label name.minikube.sigs.k8s.io=force-systemd-flag-288240 --label created_by.minikube.sigs.k8s.io=true
	I1209 05:30:38.120182 1381231 oci.go:103] Successfully created a docker volume force-systemd-flag-288240
	I1209 05:30:38.120262 1381231 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-288240-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-288240 --entrypoint /usr/bin/test -v force-systemd-flag-288240:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c -d /var/lib
	I1209 05:30:38.654687 1381231 oci.go:107] Successfully prepared a docker volume force-systemd-flag-288240
	I1209 05:30:38.654756 1381231 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1209 05:30:38.654768 1381231 kic.go:194] Starting extracting preloaded images to volume ...
	I1209 05:30:38.654842 1381231 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-288240:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c -I lz4 -xf /preloaded.tar -C /extractDir
	I1209 05:30:42.688826 1381231 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-288240:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c -I lz4 -xf /preloaded.tar -C /extractDir: (4.033944091s)
	I1209 05:30:42.688859 1381231 kic.go:203] duration metric: took 4.034086972s to extract preloaded images to volume ...
	W1209 05:30:42.689014 1381231 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1209 05:30:42.689134 1381231 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1209 05:30:42.740636 1381231 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-flag-288240 --name force-systemd-flag-288240 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-288240 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-flag-288240 --network force-systemd-flag-288240 --ip 192.168.85.2 --volume force-systemd-flag-288240:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c
	I1209 05:30:43.031122 1381231 cli_runner.go:164] Run: docker container inspect force-systemd-flag-288240 --format={{.State.Running}}
	I1209 05:30:43.054508 1381231 cli_runner.go:164] Run: docker container inspect force-systemd-flag-288240 --format={{.State.Status}}
	I1209 05:30:43.078350 1381231 cli_runner.go:164] Run: docker exec force-systemd-flag-288240 stat /var/lib/dpkg/alternatives/iptables
	I1209 05:30:43.128503 1381231 oci.go:144] the created container "force-systemd-flag-288240" has a running status.
	I1209 05:30:43.128536 1381231 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22081-1142328/.minikube/machines/force-systemd-flag-288240/id_rsa...
	I1209 05:30:43.296137 1381231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1142328/.minikube/machines/force-systemd-flag-288240/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1209 05:30:43.296185 1381231 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22081-1142328/.minikube/machines/force-systemd-flag-288240/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1209 05:30:43.317461 1381231 cli_runner.go:164] Run: docker container inspect force-systemd-flag-288240 --format={{.State.Status}}
	I1209 05:30:43.343897 1381231 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1209 05:30:43.343916 1381231 kic_runner.go:114] Args: [docker exec --privileged force-systemd-flag-288240 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1209 05:30:43.395088 1381231 cli_runner.go:164] Run: docker container inspect force-systemd-flag-288240 --format={{.State.Status}}
	I1209 05:30:43.420268 1381231 machine.go:94] provisionDockerMachine start ...
	I1209 05:30:43.420372 1381231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-288240
	I1209 05:30:43.450340 1381231 main.go:143] libmachine: Using SSH client type: native
	I1209 05:30:43.450688 1381231 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 34150 <nil> <nil>}
	I1209 05:30:43.450699 1381231 main.go:143] libmachine: About to run SSH command:
	hostname
	I1209 05:30:43.451585 1381231 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:43088->127.0.0.1:34150: read: connection reset by peer
	I1209 05:30:46.607554 1381231 main.go:143] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-288240
	
	I1209 05:30:46.607579 1381231 ubuntu.go:182] provisioning hostname "force-systemd-flag-288240"
	I1209 05:30:46.607644 1381231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-288240
	I1209 05:30:46.625312 1381231 main.go:143] libmachine: Using SSH client type: native
	I1209 05:30:46.625627 1381231 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 34150 <nil> <nil>}
	I1209 05:30:46.625649 1381231 main.go:143] libmachine: About to run SSH command:
	sudo hostname force-systemd-flag-288240 && echo "force-systemd-flag-288240" | sudo tee /etc/hostname
	I1209 05:30:46.789396 1381231 main.go:143] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-288240
	
	I1209 05:30:46.789495 1381231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-288240
	I1209 05:30:46.806172 1381231 main.go:143] libmachine: Using SSH client type: native
	I1209 05:30:46.806492 1381231 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 34150 <nil> <nil>}
	I1209 05:30:46.806515 1381231 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-flag-288240' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-flag-288240/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-flag-288240' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 05:30:46.956560 1381231 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1209 05:30:46.956590 1381231 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22081-1142328/.minikube CaCertPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22081-1142328/.minikube}
	I1209 05:30:46.956616 1381231 ubuntu.go:190] setting up certificates
	I1209 05:30:46.956625 1381231 provision.go:84] configureAuth start
	I1209 05:30:46.956693 1381231 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-288240
	I1209 05:30:46.979607 1381231 provision.go:143] copyHostCerts
	I1209 05:30:46.979650 1381231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.pem
	I1209 05:30:46.979687 1381231 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.pem, removing ...
	I1209 05:30:46.979700 1381231 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.pem
	I1209 05:30:46.979778 1381231 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.pem (1078 bytes)
	I1209 05:30:46.979865 1381231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22081-1142328/.minikube/cert.pem
	I1209 05:30:46.979888 1381231 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-1142328/.minikube/cert.pem, removing ...
	I1209 05:30:46.979898 1381231 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-1142328/.minikube/cert.pem
	I1209 05:30:46.979925 1381231 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22081-1142328/.minikube/cert.pem (1123 bytes)
	I1209 05:30:46.979979 1381231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22081-1142328/.minikube/key.pem
	I1209 05:30:46.979999 1381231 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-1142328/.minikube/key.pem, removing ...
	I1209 05:30:46.980006 1381231 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-1142328/.minikube/key.pem
	I1209 05:30:46.980104 1381231 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22081-1142328/.minikube/key.pem (1675 bytes)
	I1209 05:30:46.980192 1381231 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca-key.pem org=jenkins.force-systemd-flag-288240 san=[127.0.0.1 192.168.85.2 force-systemd-flag-288240 localhost minikube]
	I1209 05:30:47.520864 1381231 provision.go:177] copyRemoteCerts
	I1209 05:30:47.520968 1381231 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 05:30:47.521033 1381231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-288240
	I1209 05:30:47.549744 1381231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34150 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/force-systemd-flag-288240/id_rsa Username:docker}
	I1209 05:30:47.659482 1381231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1209 05:30:47.659546 1381231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1209 05:30:47.675948 1381231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1209 05:30:47.676100 1381231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1209 05:30:47.695646 1381231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1209 05:30:47.695712 1381231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1209 05:30:47.714905 1381231 provision.go:87] duration metric: took 758.257797ms to configureAuth
	I1209 05:30:47.714935 1381231 ubuntu.go:206] setting minikube options for container-runtime
	I1209 05:30:47.715123 1381231 config.go:182] Loaded profile config "force-systemd-flag-288240": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1209 05:30:47.715144 1381231 machine.go:97] duration metric: took 4.294858248s to provisionDockerMachine
	I1209 05:30:47.715152 1381231 client.go:176] duration metric: took 9.750521553s to LocalClient.Create
	I1209 05:30:47.715164 1381231 start.go:167] duration metric: took 9.750578101s to libmachine.API.Create "force-systemd-flag-288240"
	I1209 05:30:47.715172 1381231 start.go:293] postStartSetup for "force-systemd-flag-288240" (driver="docker")
	I1209 05:30:47.715187 1381231 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 05:30:47.715243 1381231 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 05:30:47.715289 1381231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-288240
	I1209 05:30:47.735381 1381231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34150 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/force-systemd-flag-288240/id_rsa Username:docker}
	I1209 05:30:47.844315 1381231 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 05:30:47.847698 1381231 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1209 05:30:47.847726 1381231 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1209 05:30:47.847737 1381231 filesync.go:126] Scanning /home/jenkins/minikube-integration/22081-1142328/.minikube/addons for local assets ...
	I1209 05:30:47.847793 1381231 filesync.go:126] Scanning /home/jenkins/minikube-integration/22081-1142328/.minikube/files for local assets ...
	I1209 05:30:47.847884 1381231 filesync.go:149] local asset: /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/ssl/certs/11442312.pem -> 11442312.pem in /etc/ssl/certs
	I1209 05:30:47.847895 1381231 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/ssl/certs/11442312.pem -> /etc/ssl/certs/11442312.pem
	I1209 05:30:47.847994 1381231 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 05:30:47.856051 1381231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/ssl/certs/11442312.pem --> /etc/ssl/certs/11442312.pem (1708 bytes)
	I1209 05:30:47.874233 1381231 start.go:296] duration metric: took 159.039966ms for postStartSetup
	I1209 05:30:47.874615 1381231 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-288240
	I1209 05:30:47.891005 1381231 profile.go:143] Saving config to /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/force-systemd-flag-288240/config.json ...
	I1209 05:30:47.891293 1381231 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1209 05:30:47.891342 1381231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-288240
	I1209 05:30:47.907438 1381231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34150 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/force-systemd-flag-288240/id_rsa Username:docker}
	I1209 05:30:48.013921 1381231 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1209 05:30:48.019336 1381231 start.go:128] duration metric: took 10.058301363s to createHost
	I1209 05:30:48.019370 1381231 start.go:83] releasing machines lock for "force-systemd-flag-288240", held for 10.058444678s
	I1209 05:30:48.019450 1381231 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-288240
	I1209 05:30:48.037670 1381231 ssh_runner.go:195] Run: cat /version.json
	I1209 05:30:48.037702 1381231 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 05:30:48.037733 1381231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-288240
	I1209 05:30:48.037771 1381231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-288240
	I1209 05:30:48.060324 1381231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34150 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/force-systemd-flag-288240/id_rsa Username:docker}
	I1209 05:30:48.064260 1381231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34150 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/force-systemd-flag-288240/id_rsa Username:docker}
	I1209 05:30:48.163657 1381231 ssh_runner.go:195] Run: systemctl --version
	I1209 05:30:48.254989 1381231 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 05:30:48.259244 1381231 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 05:30:48.259317 1381231 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 05:30:48.286785 1381231 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1209 05:30:48.286861 1381231 start.go:496] detecting cgroup driver to use...
	I1209 05:30:48.286888 1381231 start.go:500] using "systemd" cgroup driver as enforced via flags
	I1209 05:30:48.286972 1381231 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1209 05:30:48.302809 1381231 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1209 05:30:48.316037 1381231 docker.go:218] disabling cri-docker service (if available) ...
	I1209 05:30:48.316099 1381231 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 05:30:48.332773 1381231 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 05:30:48.350893 1381231 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 05:30:48.485786 1381231 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 05:30:48.615441 1381231 docker.go:234] disabling docker service ...
	I1209 05:30:48.615517 1381231 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 05:30:48.636721 1381231 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 05:30:48.649543 1381231 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 05:30:48.769631 1381231 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 05:30:48.896214 1381231 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 05:30:48.909258 1381231 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 05:30:48.922392 1381231 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1209 05:30:48.931027 1381231 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1209 05:30:48.939522 1381231 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I1209 05:30:48.939599 1381231 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1209 05:30:48.948226 1381231 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1209 05:30:48.956600 1381231 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1209 05:30:48.965122 1381231 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1209 05:30:48.973486 1381231 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 05:30:48.981115 1381231 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1209 05:30:48.989380 1381231 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1209 05:30:48.997558 1381231 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1209 05:30:49.007004 1381231 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 05:30:49.014829 1381231 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 05:30:49.022110 1381231 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 05:30:49.142736 1381231 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1209 05:30:49.301405 1381231 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1209 05:30:49.301483 1381231 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1209 05:30:49.305467 1381231 start.go:564] Will wait 60s for crictl version
	I1209 05:30:49.305554 1381231 ssh_runner.go:195] Run: which crictl
	I1209 05:30:49.309099 1381231 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1209 05:30:49.332545 1381231 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1209 05:30:49.332664 1381231 ssh_runner.go:195] Run: containerd --version
	I1209 05:30:49.357348 1381231 ssh_runner.go:195] Run: containerd --version
	I1209 05:30:49.381782 1381231 out.go:179] * Preparing Kubernetes v1.34.2 on containerd 2.2.0 ...
	I1209 05:30:50.050398 1340508 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000409812s
	I1209 05:30:50.050683 1340508 kubeadm.go:319] 
	I1209 05:30:50.050753 1340508 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1209 05:30:50.050787 1340508 kubeadm.go:319] 	- The kubelet is not running
	I1209 05:30:50.050892 1340508 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1209 05:30:50.050898 1340508 kubeadm.go:319] 
	I1209 05:30:50.051002 1340508 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1209 05:30:50.051034 1340508 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1209 05:30:50.051065 1340508 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1209 05:30:50.051069 1340508 kubeadm.go:319] 
	I1209 05:30:50.054925 1340508 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1209 05:30:50.055367 1340508 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1209 05:30:50.055479 1340508 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1209 05:30:50.055716 1340508 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1209 05:30:50.055723 1340508 kubeadm.go:319] 
	I1209 05:30:50.055792 1340508 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1209 05:30:50.055845 1340508 kubeadm.go:403] duration metric: took 12m17.527665319s to StartCluster
	I1209 05:30:50.055878 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:30:50.055939 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:30:50.090018 1340508 cri.go:89] found id: ""
	I1209 05:30:50.090041 1340508 logs.go:282] 0 containers: []
	W1209 05:30:50.090049 1340508 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:30:50.090055 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:30:50.090116 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:30:50.122670 1340508 cri.go:89] found id: ""
	I1209 05:30:50.122691 1340508 logs.go:282] 0 containers: []
	W1209 05:30:50.122700 1340508 logs.go:284] No container was found matching "etcd"
	I1209 05:30:50.122706 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:30:50.122771 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:30:50.158597 1340508 cri.go:89] found id: ""
	I1209 05:30:50.158617 1340508 logs.go:282] 0 containers: []
	W1209 05:30:50.158626 1340508 logs.go:284] No container was found matching "coredns"
	I1209 05:30:50.158632 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:30:50.158688 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:30:50.194645 1340508 cri.go:89] found id: ""
	I1209 05:30:50.194671 1340508 logs.go:282] 0 containers: []
	W1209 05:30:50.194679 1340508 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:30:50.194689 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:30:50.194764 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:30:50.234640 1340508 cri.go:89] found id: ""
	I1209 05:30:50.234661 1340508 logs.go:282] 0 containers: []
	W1209 05:30:50.234670 1340508 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:30:50.234676 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:30:50.234735 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:30:50.262325 1340508 cri.go:89] found id: ""
	I1209 05:30:50.262406 1340508 logs.go:282] 0 containers: []
	W1209 05:30:50.262445 1340508 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:30:50.262455 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:30:50.262543 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:30:50.294557 1340508 cri.go:89] found id: ""
	I1209 05:30:50.294584 1340508 logs.go:282] 0 containers: []
	W1209 05:30:50.294591 1340508 logs.go:284] No container was found matching "kindnet"
	I1209 05:30:50.294598 1340508 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:30:50.294703 1340508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:30:50.327015 1340508 cri.go:89] found id: ""
	I1209 05:30:50.327087 1340508 logs.go:282] 0 containers: []
	W1209 05:30:50.327098 1340508 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:30:50.327183 1340508 logs.go:123] Gathering logs for kubelet ...
	I1209 05:30:50.327225 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:30:50.396351 1340508 logs.go:123] Gathering logs for dmesg ...
	I1209 05:30:50.396435 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:30:50.413877 1340508 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:30:50.413902 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:30:50.557364 1340508 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:30:50.557430 1340508 logs.go:123] Gathering logs for containerd ...
	I1209 05:30:50.557455 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:30:50.610343 1340508 logs.go:123] Gathering logs for container status ...
	I1209 05:30:50.610422 1340508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1209 05:30:50.651163 1340508 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000409812s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1209 05:30:50.651205 1340508 out.go:285] * 
	W1209 05:30:50.651255 1340508 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000409812s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1209 05:30:50.651265 1340508 out.go:285] * 
	W1209 05:30:50.653400 1340508 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 05:30:50.659043 1340508 out.go:203] 
	W1209 05:30:50.662759 1340508 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000409812s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1209 05:30:50.662799 1340508 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1209 05:30:50.662819 1340508 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1209 05:30:50.666095 1340508 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 09 05:22:43 kubernetes-upgrade-511751 containerd[554]: time="2025-12-09T05:22:43.659270169Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 09 05:22:43 kubernetes-upgrade-511751 containerd[554]: time="2025-12-09T05:22:43.660575982Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.13.1\" with image id \"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\", repo tag \"registry.k8s.io/coredns/coredns:v1.13.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\", size \"21168808\" in 1.50753571s"
	Dec 09 05:22:43 kubernetes-upgrade-511751 containerd[554]: time="2025-12-09T05:22:43.660619435Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\" returns image reference \"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\""
	Dec 09 05:22:43 kubernetes-upgrade-511751 containerd[554]: time="2025-12-09T05:22:43.662107085Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
	Dec 09 05:22:44 kubernetes-upgrade-511751 containerd[554]: time="2025-12-09T05:22:44.305546815Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
	Dec 09 05:22:44 kubernetes-upgrade-511751 containerd[554]: time="2025-12-09T05:22:44.307405155Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=268709"
	Dec 09 05:22:44 kubernetes-upgrade-511751 containerd[554]: time="2025-12-09T05:22:44.309914662Z" level=info msg="ImageCreate event name:\"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
	Dec 09 05:22:44 kubernetes-upgrade-511751 containerd[554]: time="2025-12-09T05:22:44.313543608Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
	Dec 09 05:22:44 kubernetes-upgrade-511751 containerd[554]: time="2025-12-09T05:22:44.314417850Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"267939\" in 652.277847ms"
	Dec 09 05:22:44 kubernetes-upgrade-511751 containerd[554]: time="2025-12-09T05:22:44.314540637Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\""
	Dec 09 05:22:44 kubernetes-upgrade-511751 containerd[554]: time="2025-12-09T05:22:44.315438813Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\""
	Dec 09 05:22:46 kubernetes-upgrade-511751 containerd[554]: time="2025-12-09T05:22:46.326799990Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 09 05:22:46 kubernetes-upgrade-511751 containerd[554]: time="2025-12-09T05:22:46.328948604Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.5-0: active requests=0, bytes read=21140371"
	Dec 09 05:22:46 kubernetes-upgrade-511751 containerd[554]: time="2025-12-09T05:22:46.331400045Z" level=info msg="ImageCreate event name:\"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 09 05:22:46 kubernetes-upgrade-511751 containerd[554]: time="2025-12-09T05:22:46.335514582Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 09 05:22:46 kubernetes-upgrade-511751 containerd[554]: time="2025-12-09T05:22:46.336789232Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.5-0\" with image id \"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\", repo tag \"registry.k8s.io/etcd:3.6.5-0\", repo digest \"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\", size \"21136588\" in 2.02131481s"
	Dec 09 05:22:46 kubernetes-upgrade-511751 containerd[554]: time="2025-12-09T05:22:46.336919502Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\" returns image reference \"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\""
	Dec 09 05:27:35 kubernetes-upgrade-511751 containerd[554]: time="2025-12-09T05:27:35.781511029Z" level=info msg="container event discarded" container=40764729e712eb71d4f1055e6b2f3e3d38a4a513cfe7ac6b821607b35482332b type=CONTAINER_DELETED_EVENT
	Dec 09 05:27:35 kubernetes-upgrade-511751 containerd[554]: time="2025-12-09T05:27:35.795467074Z" level=info msg="container event discarded" container=f70d4cdfd07e87a733484f1474d237d67441b116c8195908ce754cc8c6aa58d2 type=CONTAINER_DELETED_EVENT
	Dec 09 05:27:35 kubernetes-upgrade-511751 containerd[554]: time="2025-12-09T05:27:35.807944709Z" level=info msg="container event discarded" container=ca2668b1a0a3d53ab75ca83dd34698623ae3b838db0be118632ff51d82a28d0a type=CONTAINER_DELETED_EVENT
	Dec 09 05:27:35 kubernetes-upgrade-511751 containerd[554]: time="2025-12-09T05:27:35.808007312Z" level=info msg="container event discarded" container=4d82f126efea26f378c4bb89a45cbe1acfa672553b2e25fb00ad8e65018b15aa type=CONTAINER_DELETED_EVENT
	Dec 09 05:27:35 kubernetes-upgrade-511751 containerd[554]: time="2025-12-09T05:27:35.824261988Z" level=info msg="container event discarded" container=f132b6d8bb2a356aeaccfeaeec760fa73d671f1c17dbdfe7542c70d5382b39da type=CONTAINER_DELETED_EVENT
	Dec 09 05:27:35 kubernetes-upgrade-511751 containerd[554]: time="2025-12-09T05:27:35.824327373Z" level=info msg="container event discarded" container=bd151919686a251657f3179ba0cb979ceeadf3759e85280de3dd3d87ab714b9b type=CONTAINER_DELETED_EVENT
	Dec 09 05:27:35 kubernetes-upgrade-511751 containerd[554]: time="2025-12-09T05:27:35.840561856Z" level=info msg="container event discarded" container=7af9450f91496ab48a9a983dc77379d54c30461f3b6d878b1a42c4b9abeec547 type=CONTAINER_DELETED_EVENT
	Dec 09 05:27:35 kubernetes-upgrade-511751 containerd[554]: time="2025-12-09T05:27:35.840631860Z" level=info msg="container event discarded" container=1df901b40f73689e0282fb0f9eeabbdde1c456b5f10c7d59ed9c9f0e8ee6d8c0 type=CONTAINER_DELETED_EVENT
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 9 03:13] overlayfs: idmapped layers are currently not supported
	[ +25.904254] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:14] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:16] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:18] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:19] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:30] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:32] overlayfs: idmapped layers are currently not supported
	[ +28.114653] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:33] overlayfs: idmapped layers are currently not supported
	[ +23.720849] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:34] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:35] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:36] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:37] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:38] overlayfs: idmapped layers are currently not supported
	[ +23.656275] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:39] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:57] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:58] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:00] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:02] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:03] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:05] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:06] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 05:30:53 up  8:12,  0 user,  load average: 2.16, 1.54, 1.68
	Linux kubernetes-upgrade-511751 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 09 05:30:49 kubernetes-upgrade-511751 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 09 05:30:50 kubernetes-upgrade-511751 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 09 05:30:50 kubernetes-upgrade-511751 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 05:30:50 kubernetes-upgrade-511751 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 05:30:50 kubernetes-upgrade-511751 kubelet[14380]: E1209 05:30:50.775786   14380 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 09 05:30:50 kubernetes-upgrade-511751 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 09 05:30:50 kubernetes-upgrade-511751 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 09 05:30:51 kubernetes-upgrade-511751 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 09 05:30:51 kubernetes-upgrade-511751 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 05:30:51 kubernetes-upgrade-511751 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 05:30:51 kubernetes-upgrade-511751 kubelet[14385]: E1209 05:30:51.543484   14385 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 09 05:30:51 kubernetes-upgrade-511751 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 09 05:30:51 kubernetes-upgrade-511751 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 09 05:30:52 kubernetes-upgrade-511751 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Dec 09 05:30:52 kubernetes-upgrade-511751 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 05:30:52 kubernetes-upgrade-511751 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 05:30:52 kubernetes-upgrade-511751 kubelet[14392]: E1209 05:30:52.250334   14392 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 09 05:30:52 kubernetes-upgrade-511751 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 09 05:30:52 kubernetes-upgrade-511751 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 09 05:30:52 kubernetes-upgrade-511751 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 323.
	Dec 09 05:30:52 kubernetes-upgrade-511751 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 05:30:52 kubernetes-upgrade-511751 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 05:30:52 kubernetes-upgrade-511751 kubelet[14450]: E1209 05:30:52.995201   14450 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 09 05:30:52 kubernetes-upgrade-511751 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 09 05:30:52 kubernetes-upgrade-511751 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
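
The kubelet journal excerpt above repeats the same validation failure on every systemd restart (counter 320 through 323): kubelet v1.35.0-beta.0 exits because the node is still on cgroup v1. A minimal triage sketch against the node container, using only the commands the kubeadm output itself recommends plus a cgroup-version check; the container name is taken from this log, everything else is an assumption and not part of the recorded run:

	# Inspect the failing kubelet inside the kic node container (name from this log).
	docker exec kubernetes-upgrade-511751 systemctl status kubelet    # exit-code failures, restart loop
	docker exec kubernetes-upgrade-511751 journalctl -xeu kubelet     # "not run on a host using cgroup v1"
	# Check which cgroup hierarchy the kernel exposes to the container:
	docker exec kubernetes-upgrade-511751 stat -fc %T /sys/fs/cgroup  # cgroup2fs = v2, tmpfs = v1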
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p kubernetes-upgrade-511751 -n kubernetes-upgrade-511751
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p kubernetes-upgrade-511751 -n kubernetes-upgrade-511751: exit status 2 (455.327609ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "kubernetes-upgrade-511751" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-511751" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-511751
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-511751: (2.49742009s)
--- FAIL: TestKubernetesUpgrade (796.41s)
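For future triage of this failure, the same kubelet journal can be read directly off the node before the cleanup step above deletes the profile; a sketch using this run's profile name (hypothetical invocation, not part of the test):

	out/minikube-linux-arm64 ssh -p kubernetes-upgrade-511751 -- sudo journalctl -u kubelet --no-pager -n 50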

x
+
TestStartStop/group/no-preload/serial/FirstStart (515.83s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-842269 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
E1209 05:35:32.751156 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 05:36:06.732125 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/addons-221952/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p no-preload-842269 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: exit status 109 (8m34.268770649s)

-- stdout --
	* [no-preload-842269] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22081
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22081-1142328/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-1142328/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "no-preload-842269" primary control-plane node in "no-preload-842269" cluster
	* Pulling base image v0.0.48-1765184860-22066 ...
	* Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	
	

-- /stdout --
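Because the test passes --preload=false, the stderr below shows minikube skipping the preload tarball and instead resolving, downloading, and caching each control-plane image individually under the test's MINIKUBE_HOME. A sketch for inspecting that per-image cache on this runner (path taken from the log lines that follow):

	ls /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/images/arm64/registry.k8s.io/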
** stderr ** 
	I1209 05:35:09.187227 1404644 out.go:360] Setting OutFile to fd 1 ...
	I1209 05:35:09.187791 1404644 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 05:35:09.187826 1404644 out.go:374] Setting ErrFile to fd 2...
	I1209 05:35:09.187845 1404644 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 05:35:09.188173 1404644 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-1142328/.minikube/bin
	I1209 05:35:09.188625 1404644 out.go:368] Setting JSON to false
	I1209 05:35:09.189603 1404644 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":29833,"bootTime":1765228677,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1209 05:35:09.189697 1404644 start.go:143] virtualization:  
	I1209 05:35:09.193750 1404644 out.go:179] * [no-preload-842269] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1209 05:35:09.197213 1404644 out.go:179]   - MINIKUBE_LOCATION=22081
	I1209 05:35:09.197282 1404644 notify.go:221] Checking for updates...
	I1209 05:35:09.204177 1404644 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 05:35:09.208296 1404644 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22081-1142328/kubeconfig
	I1209 05:35:09.211408 1404644 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-1142328/.minikube
	I1209 05:35:09.215747 1404644 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1209 05:35:09.218838 1404644 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 05:35:09.222394 1404644 config.go:182] Loaded profile config "embed-certs-432108": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1209 05:35:09.222500 1404644 driver.go:422] Setting default libvirt URI to qemu:///system
	I1209 05:35:09.265426 1404644 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1209 05:35:09.265562 1404644 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 05:35:09.361122 1404644 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-09 05:35:09.343773534 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1209 05:35:09.361223 1404644 docker.go:319] overlay module found
	I1209 05:35:09.364377 1404644 out.go:179] * Using the docker driver based on user configuration
	I1209 05:35:09.367242 1404644 start.go:309] selected driver: docker
	I1209 05:35:09.367260 1404644 start.go:927] validating driver "docker" against <nil>
	I1209 05:35:09.367279 1404644 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 05:35:09.368055 1404644 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 05:35:09.451782 1404644 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-09 05:35:09.441016062 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1209 05:35:09.451946 1404644 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1209 05:35:09.452217 1404644 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 05:35:09.455216 1404644 out.go:179] * Using Docker driver with root privileges
	I1209 05:35:09.457915 1404644 cni.go:84] Creating CNI manager for ""
	I1209 05:35:09.457975 1404644 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1209 05:35:09.457988 1404644 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1209 05:35:09.458063 1404644 start.go:353] cluster config:
	{Name:no-preload-842269 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-842269 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 05:35:09.460918 1404644 out.go:179] * Starting "no-preload-842269" primary control-plane node in "no-preload-842269" cluster
	I1209 05:35:09.463694 1404644 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1209 05:35:09.466523 1404644 out.go:179] * Pulling base image v0.0.48-1765184860-22066 ...
	I1209 05:35:09.469228 1404644 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1209 05:35:09.469364 1404644 profile.go:143] Saving config to /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/no-preload-842269/config.json ...
	I1209 05:35:09.469405 1404644 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/no-preload-842269/config.json: {Name:mk28b84001ad1b8d21774de7f6c75bd5add492da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 05:35:09.469607 1404644 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local docker daemon
	I1209 05:35:09.469891 1404644 cache.go:107] acquiring lock: {Name:mkf65d4ffaf3daf987b7ba0301a9962f00106981 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 05:35:09.469967 1404644 cache.go:115] /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1209 05:35:09.469980 1404644 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22081-1142328/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 97.589µs
	I1209 05:35:09.469993 1404644 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1209 05:35:09.470009 1404644 cache.go:107] acquiring lock: {Name:mk4d0c4ab95f11691dbecfbd7b2c72b3028abf9f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 05:35:09.470084 1404644 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1209 05:35:09.470433 1404644 cache.go:107] acquiring lock: {Name:mk7cb8e420e05ffddcb417dedf3ddace46afcf1b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 05:35:09.470537 1404644 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1209 05:35:09.470750 1404644 cache.go:107] acquiring lock: {Name:mka2eb1b7c29ae7ae604d5f65c47b25198cfb45b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 05:35:09.470844 1404644 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1209 05:35:09.471078 1404644 cache.go:107] acquiring lock: {Name:mkade1779cb2ecc1c54a36bd1719bf2ef87bdf51 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 05:35:09.471189 1404644 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1209 05:35:09.471421 1404644 cache.go:107] acquiring lock: {Name:mk604b76e7428f7b39bf507a7086fea810617cc7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 05:35:09.471486 1404644 cache.go:115] /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1209 05:35:09.471499 1404644 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22081-1142328/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 82.443µs
	I1209 05:35:09.471506 1404644 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1209 05:35:09.471524 1404644 cache.go:107] acquiring lock: {Name:mk605cb0bdcc667f1a6cc01dc2d318b41822c88f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 05:35:09.471566 1404644 cache.go:115] /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1209 05:35:09.471576 1404644 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/22081-1142328/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 53.463µs
	I1209 05:35:09.471589 1404644 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1209 05:35:09.471605 1404644 cache.go:107] acquiring lock: {Name:mk288542758fec96b5cb8ac3de75700c31bfbfc0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 05:35:09.471670 1404644 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1209 05:35:09.474791 1404644 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1209 05:35:09.475384 1404644 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1209 05:35:09.475623 1404644 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1209 05:35:09.475831 1404644 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1209 05:35:09.476049 1404644 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1209 05:35:09.530312 1404644 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local docker daemon, skipping pull
	I1209 05:35:09.530345 1404644 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c exists in daemon, skipping load
	I1209 05:35:09.530377 1404644 cache.go:243] Successfully downloaded all kic artifacts
	I1209 05:35:09.530427 1404644 start.go:360] acquireMachinesLock for no-preload-842269: {Name:mk19b7be61094a19b29603fb95f6d7b282529614 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 05:35:09.530610 1404644 start.go:364] duration metric: took 154.638µs to acquireMachinesLock for "no-preload-842269"
	I1209 05:35:09.530670 1404644 start.go:93] Provisioning new machine with config: &{Name:no-preload-842269 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-842269 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1209 05:35:09.530840 1404644 start.go:125] createHost starting for "" (driver="docker")
	I1209 05:35:09.534449 1404644 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1209 05:35:09.534824 1404644 start.go:159] libmachine.API.Create for "no-preload-842269" (driver="docker")
	I1209 05:35:09.534894 1404644 client.go:173] LocalClient.Create starting
	I1209 05:35:09.535016 1404644 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem
	I1209 05:35:09.535084 1404644 main.go:143] libmachine: Decoding PEM data...
	I1209 05:35:09.535109 1404644 main.go:143] libmachine: Parsing certificate...
	I1209 05:35:09.535228 1404644 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/cert.pem
	I1209 05:35:09.535302 1404644 main.go:143] libmachine: Decoding PEM data...
	I1209 05:35:09.535339 1404644 main.go:143] libmachine: Parsing certificate...
	I1209 05:35:09.535848 1404644 cli_runner.go:164] Run: docker network inspect no-preload-842269 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1209 05:35:09.554336 1404644 cli_runner.go:211] docker network inspect no-preload-842269 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1209 05:35:09.554422 1404644 network_create.go:284] running [docker network inspect no-preload-842269] to gather additional debugging logs...
	I1209 05:35:09.554444 1404644 cli_runner.go:164] Run: docker network inspect no-preload-842269
	W1209 05:35:09.573502 1404644 cli_runner.go:211] docker network inspect no-preload-842269 returned with exit code 1
	I1209 05:35:09.573536 1404644 network_create.go:287] error running [docker network inspect no-preload-842269]: docker network inspect no-preload-842269: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-842269 not found
	I1209 05:35:09.573551 1404644 network_create.go:289] output of [docker network inspect no-preload-842269]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-842269 not found
	
	** /stderr **
	I1209 05:35:09.573648 1404644 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1209 05:35:09.606397 1404644 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-7a15eec16b1a IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:8a:b7:58:bc:12:6c} reservation:<nil>}
	I1209 05:35:09.606858 1404644 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-fcb9e6b38e8e IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:56:c3:7a:b4:06:4b} reservation:<nil>}
	I1209 05:35:09.607152 1404644 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-8c1346c67d6b IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:82:10:14:75:55:fb} reservation:<nil>}
	I1209 05:35:09.607689 1404644 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-004885322f81 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:8a:e0:88:0a:1d:1b} reservation:<nil>}
	I1209 05:35:09.608306 1404644 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001b6dd00}
	I1209 05:35:09.608337 1404644 network_create.go:124] attempt to create docker network no-preload-842269 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1209 05:35:09.608399 1404644 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-842269 no-preload-842269
	I1209 05:35:09.701249 1404644 network_create.go:108] docker network no-preload-842269 192.168.85.0/24 created
	I1209 05:35:09.701283 1404644 kic.go:121] calculated static IP "192.168.85.2" for the "no-preload-842269" container
	I1209 05:35:09.701356 1404644 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1209 05:35:09.720987 1404644 cli_runner.go:164] Run: docker volume create no-preload-842269 --label name.minikube.sigs.k8s.io=no-preload-842269 --label created_by.minikube.sigs.k8s.io=true
	I1209 05:35:09.741661 1404644 oci.go:103] Successfully created a docker volume no-preload-842269
	I1209 05:35:09.741764 1404644 cli_runner.go:164] Run: docker run --rm --name no-preload-842269-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-842269 --entrypoint /usr/bin/test -v no-preload-842269:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c -d /var/lib
	I1209 05:35:09.842389 1404644 cache.go:162] opening:  /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0
	I1209 05:35:09.844295 1404644 cache.go:162] opening:  /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0
	I1209 05:35:09.923271 1404644 cache.go:162] opening:  /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0
	I1209 05:35:09.924058 1404644 cache.go:162] opening:  /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1
	I1209 05:35:09.995102 1404644 cache.go:162] opening:  /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0
	I1209 05:35:10.436995 1404644 cache.go:157] /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1209 05:35:10.437067 1404644 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22081-1142328/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 965.98945ms
	I1209 05:35:10.437096 1404644 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1209 05:35:10.482462 1404644 oci.go:107] Successfully prepared a docker volume no-preload-842269
	I1209 05:35:10.482572 1404644 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	W1209 05:35:10.482736 1404644 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1209 05:35:10.482898 1404644 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1209 05:35:10.595523 1404644 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-842269 --name no-preload-842269 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-842269 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-842269 --network no-preload-842269 --ip 192.168.85.2 --volume no-preload-842269:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c
	I1209 05:35:11.002082 1404644 cache.go:157] /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1209 05:35:11.002110 1404644 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22081-1142328/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 1.532101621s
	I1209 05:35:11.002124 1404644 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1209 05:35:11.034023 1404644 cache.go:157] /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1209 05:35:11.034064 1404644 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22081-1142328/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 1.563313926s
	I1209 05:35:11.034079 1404644 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1209 05:35:11.042977 1404644 cache.go:157] /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1209 05:35:11.043001 1404644 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22081-1142328/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 1.571395725s
	I1209 05:35:11.043013 1404644 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1209 05:35:11.051000 1404644 cache.go:157] /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1209 05:35:11.051073 1404644 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22081-1142328/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 1.580643121s
	I1209 05:35:11.051112 1404644 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1209 05:35:11.051151 1404644 cache.go:87] Successfully saved all images to host disk.
	I1209 05:35:11.100562 1404644 cli_runner.go:164] Run: docker container inspect no-preload-842269 --format={{.State.Running}}
	I1209 05:35:11.129310 1404644 cli_runner.go:164] Run: docker container inspect no-preload-842269 --format={{.State.Status}}
	I1209 05:35:11.157308 1404644 cli_runner.go:164] Run: docker exec no-preload-842269 stat /var/lib/dpkg/alternatives/iptables
	I1209 05:35:11.209881 1404644 oci.go:144] the created container "no-preload-842269" has a running status.
	I1209 05:35:11.209913 1404644 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22081-1142328/.minikube/machines/no-preload-842269/id_rsa...
	I1209 05:35:11.959389 1404644 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22081-1142328/.minikube/machines/no-preload-842269/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1209 05:35:11.997729 1404644 cli_runner.go:164] Run: docker container inspect no-preload-842269 --format={{.State.Status}}
	I1209 05:35:12.025752 1404644 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1209 05:35:12.025781 1404644 kic_runner.go:114] Args: [docker exec --privileged no-preload-842269 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1209 05:35:12.106916 1404644 cli_runner.go:164] Run: docker container inspect no-preload-842269 --format={{.State.Status}}
	I1209 05:35:12.125876 1404644 machine.go:94] provisionDockerMachine start ...
	I1209 05:35:12.125968 1404644 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-842269
	I1209 05:35:12.144220 1404644 main.go:143] libmachine: Using SSH client type: native
	I1209 05:35:12.144643 1404644 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 34185 <nil> <nil>}
	I1209 05:35:12.144658 1404644 main.go:143] libmachine: About to run SSH command:
	hostname
	I1209 05:35:12.145353 1404644 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1209 05:35:15.316490 1404644 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-842269
	
	I1209 05:35:15.316516 1404644 ubuntu.go:182] provisioning hostname "no-preload-842269"
	I1209 05:35:15.316637 1404644 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-842269
	I1209 05:35:15.340121 1404644 main.go:143] libmachine: Using SSH client type: native
	I1209 05:35:15.340453 1404644 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 34185 <nil> <nil>}
	I1209 05:35:15.340472 1404644 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-842269 && echo "no-preload-842269" | sudo tee /etc/hostname
	I1209 05:35:15.517027 1404644 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-842269
	
	I1209 05:35:15.517201 1404644 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-842269
	I1209 05:35:15.539058 1404644 main.go:143] libmachine: Using SSH client type: native
	I1209 05:35:15.539399 1404644 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 34185 <nil> <nil>}
	I1209 05:35:15.539422 1404644 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-842269' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-842269/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-842269' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 05:35:15.701244 1404644 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1209 05:35:15.701273 1404644 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22081-1142328/.minikube CaCertPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22081-1142328/.minikube}
	I1209 05:35:15.701316 1404644 ubuntu.go:190] setting up certificates
	I1209 05:35:15.701325 1404644 provision.go:84] configureAuth start
	I1209 05:35:15.701418 1404644 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-842269
	I1209 05:35:15.725573 1404644 provision.go:143] copyHostCerts
	I1209 05:35:15.725657 1404644 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.pem, removing ...
	I1209 05:35:15.725674 1404644 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.pem
	I1209 05:35:15.725760 1404644 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.pem (1078 bytes)
	I1209 05:35:15.725873 1404644 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-1142328/.minikube/cert.pem, removing ...
	I1209 05:35:15.725884 1404644 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-1142328/.minikube/cert.pem
	I1209 05:35:15.725916 1404644 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22081-1142328/.minikube/cert.pem (1123 bytes)
	I1209 05:35:15.725991 1404644 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-1142328/.minikube/key.pem, removing ...
	I1209 05:35:15.726001 1404644 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-1142328/.minikube/key.pem
	I1209 05:35:15.726028 1404644 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22081-1142328/.minikube/key.pem (1675 bytes)
	I1209 05:35:15.726081 1404644 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca-key.pem org=jenkins.no-preload-842269 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-842269]
	I1209 05:35:15.908916 1404644 provision.go:177] copyRemoteCerts
	I1209 05:35:15.909023 1404644 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 05:35:15.909096 1404644 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-842269
	I1209 05:35:15.930076 1404644 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34185 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/no-preload-842269/id_rsa Username:docker}
	I1209 05:35:16.041139 1404644 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1209 05:35:16.061900 1404644 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1209 05:35:16.082257 1404644 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1209 05:35:16.102640 1404644 provision.go:87] duration metric: took 401.290816ms to configureAuth
	I1209 05:35:16.102707 1404644 ubuntu.go:206] setting minikube options for container-runtime
	I1209 05:35:16.102926 1404644 config.go:182] Loaded profile config "no-preload-842269": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1209 05:35:16.102956 1404644 machine.go:97] duration metric: took 3.977062321s to provisionDockerMachine
	I1209 05:35:16.102977 1404644 client.go:176] duration metric: took 6.568064746s to LocalClient.Create
	I1209 05:35:16.103005 1404644 start.go:167] duration metric: took 6.568183488s to libmachine.API.Create "no-preload-842269"
	I1209 05:35:16.103038 1404644 start.go:293] postStartSetup for "no-preload-842269" (driver="docker")
	I1209 05:35:16.103070 1404644 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 05:35:16.103149 1404644 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 05:35:16.103223 1404644 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-842269
	I1209 05:35:16.127075 1404644 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34185 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/no-preload-842269/id_rsa Username:docker}
	I1209 05:35:16.236882 1404644 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 05:35:16.240802 1404644 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1209 05:35:16.240826 1404644 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1209 05:35:16.240837 1404644 filesync.go:126] Scanning /home/jenkins/minikube-integration/22081-1142328/.minikube/addons for local assets ...
	I1209 05:35:16.240894 1404644 filesync.go:126] Scanning /home/jenkins/minikube-integration/22081-1142328/.minikube/files for local assets ...
	I1209 05:35:16.240968 1404644 filesync.go:149] local asset: /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/ssl/certs/11442312.pem -> 11442312.pem in /etc/ssl/certs
	I1209 05:35:16.241067 1404644 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 05:35:16.249178 1404644 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/ssl/certs/11442312.pem --> /etc/ssl/certs/11442312.pem (1708 bytes)
	I1209 05:35:16.268562 1404644 start.go:296] duration metric: took 165.490388ms for postStartSetup
	I1209 05:35:16.268978 1404644 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-842269
	I1209 05:35:16.288085 1404644 profile.go:143] Saving config to /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/no-preload-842269/config.json ...
	I1209 05:35:16.288366 1404644 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1209 05:35:16.288407 1404644 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-842269
	I1209 05:35:16.315970 1404644 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34185 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/no-preload-842269/id_rsa Username:docker}
	I1209 05:35:16.425730 1404644 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1209 05:35:16.431687 1404644 start.go:128] duration metric: took 6.900811686s to createHost
	I1209 05:35:16.431717 1404644 start.go:83] releasing machines lock for "no-preload-842269", held for 6.90109112s
	I1209 05:35:16.431811 1404644 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-842269
	I1209 05:35:16.452791 1404644 ssh_runner.go:195] Run: cat /version.json
	I1209 05:35:16.452838 1404644 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-842269
	I1209 05:35:16.453094 1404644 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 05:35:16.453167 1404644 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-842269
	I1209 05:35:16.491886 1404644 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34185 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/no-preload-842269/id_rsa Username:docker}
	I1209 05:35:16.496354 1404644 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34185 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/no-preload-842269/id_rsa Username:docker}
	I1209 05:35:16.616148 1404644 ssh_runner.go:195] Run: systemctl --version
	I1209 05:35:16.717625 1404644 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 05:35:16.727793 1404644 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 05:35:16.727868 1404644 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 05:35:16.768981 1404644 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1209 05:35:16.769058 1404644 start.go:496] detecting cgroup driver to use...
	I1209 05:35:16.769122 1404644 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1209 05:35:16.769203 1404644 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1209 05:35:16.785885 1404644 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1209 05:35:16.800197 1404644 docker.go:218] disabling cri-docker service (if available) ...
	I1209 05:35:16.800260 1404644 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 05:35:16.818594 1404644 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 05:35:16.837224 1404644 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 05:35:16.980430 1404644 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 05:35:17.138410 1404644 docker.go:234] disabling docker service ...
	I1209 05:35:17.138476 1404644 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 05:35:17.165281 1404644 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 05:35:17.180566 1404644 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 05:35:17.329328 1404644 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 05:35:17.491786 1404644 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 05:35:17.510170 1404644 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 05:35:17.543598 1404644 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1209 05:35:17.558928 1404644 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1209 05:35:17.568357 1404644 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1209 05:35:17.568429 1404644 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1209 05:35:17.577358 1404644 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1209 05:35:17.586295 1404644 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1209 05:35:17.595186 1404644 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1209 05:35:17.604100 1404644 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 05:35:17.612255 1404644 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1209 05:35:17.621113 1404644 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1209 05:35:17.629848 1404644 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1209 05:35:17.638646 1404644 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 05:35:17.646875 1404644 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 05:35:17.654661 1404644 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 05:35:17.797261 1404644 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1209 05:35:17.924523 1404644 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1209 05:35:17.924645 1404644 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1209 05:35:17.928913 1404644 start.go:564] Will wait 60s for crictl version
	I1209 05:35:17.929023 1404644 ssh_runner.go:195] Run: which crictl
	I1209 05:35:17.940943 1404644 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1209 05:35:17.987334 1404644 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1209 05:35:17.987459 1404644 ssh_runner.go:195] Run: containerd --version
	I1209 05:35:18.027084 1404644 ssh_runner.go:195] Run: containerd --version
	I1209 05:35:18.057426 1404644 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1209 05:35:18.060615 1404644 cli_runner.go:164] Run: docker network inspect no-preload-842269 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1209 05:35:18.081042 1404644 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1209 05:35:18.086258 1404644 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 05:35:18.098893 1404644 kubeadm.go:884] updating cluster {Name:no-preload-842269 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-842269 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 05:35:18.099041 1404644 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1209 05:35:18.099109 1404644 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 05:35:18.141151 1404644 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-beta.0". assuming images are not preloaded.
	I1209 05:35:18.141180 1404644 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0-beta.0 registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 registry.k8s.io/kube-scheduler:v1.35.0-beta.0 registry.k8s.io/kube-proxy:v1.35.0-beta.0 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.5-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1209 05:35:18.141230 1404644 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 05:35:18.141475 1404644 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1209 05:35:18.141601 1404644 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1209 05:35:18.141700 1404644 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1209 05:35:18.141821 1404644 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1209 05:35:18.141952 1404644 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1209 05:35:18.142060 1404644 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.5-0
	I1209 05:35:18.142182 1404644 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1209 05:35:18.145982 1404644 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1209 05:35:18.146307 1404644 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1209 05:35:18.146529 1404644 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1209 05:35:18.146728 1404644 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1209 05:35:18.146950 1404644 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 05:35:18.147394 1404644 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1209 05:35:18.147627 1404644 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.5-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.5-0
	I1209 05:35:18.147946 1404644 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
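	(Editor's note: the eight daemon-lookup failures above are the expected path on a no-preload run; the local Docker daemon holds none of these images, so minikube falls back to its on-disk cache. A minimal sketch, with the profile name taken from this log, to list what containerd's k8s.io namespace already holds inside the node:)
		# List image refs visible to containerd inside the minikube node
		minikube -p no-preload-842269 ssh -- sudo ctr -n k8s.io images ls -q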
	I1209 05:35:18.444360 1404644 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" and sha "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4"
	I1209 05:35:18.444481 1404644 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1209 05:35:18.451793 1404644 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" and sha "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be"
	I1209 05:35:18.451927 1404644 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1209 05:35:18.458501 1404644 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-proxy:v1.35.0-beta.0" and sha "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904"
	I1209 05:35:18.458630 1404644 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1209 05:35:18.511112 1404644 containerd.go:267] Checking existence of image with name "registry.k8s.io/pause:3.10.1" and sha "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd"
	I1209 05:35:18.511207 1404644 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/pause:3.10.1
	I1209 05:35:18.521776 1404644 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" and sha "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b"
	I1209 05:35:18.521880 1404644 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1209 05:35:18.527135 1404644 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" does not exist at hash "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4" in container runtime
	I1209 05:35:18.527215 1404644 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1209 05:35:18.527285 1404644 ssh_runner.go:195] Run: which crictl
	I1209 05:35:18.531475 1404644 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" does not exist at hash "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be" in container runtime
	I1209 05:35:18.531538 1404644 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1209 05:35:18.531614 1404644 ssh_runner.go:195] Run: which crictl
	I1209 05:35:18.537164 1404644 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0-beta.0" does not exist at hash "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904" in container runtime
	I1209 05:35:18.537213 1404644 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1209 05:35:18.537292 1404644 ssh_runner.go:195] Run: which crictl
	I1209 05:35:18.538162 1404644 containerd.go:267] Checking existence of image with name "registry.k8s.io/coredns/coredns:v1.13.1" and sha "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf"
	I1209 05:35:18.538246 1404644 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/coredns/coredns:v1.13.1
	I1209 05:35:18.588252 1404644 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1209 05:35:18.588313 1404644 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1209 05:35:18.588382 1404644 ssh_runner.go:195] Run: which crictl
	I1209 05:35:18.592476 1404644 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" does not exist at hash "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b" in container runtime
	I1209 05:35:18.592514 1404644 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1209 05:35:18.592592 1404644 ssh_runner.go:195] Run: which crictl
	I1209 05:35:18.592594 1404644 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1209 05:35:18.592680 1404644 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1209 05:35:18.592744 1404644 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1209 05:35:18.600694 1404644 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf" in container runtime
	I1209 05:35:18.600746 1404644 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
	I1209 05:35:18.600829 1404644 ssh_runner.go:195] Run: which crictl
	I1209 05:35:18.625260 1404644 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1209 05:35:18.627959 1404644 containerd.go:267] Checking existence of image with name "registry.k8s.io/etcd:3.6.5-0" and sha "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42"
	I1209 05:35:18.628044 1404644 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/etcd:3.6.5-0
	I1209 05:35:18.726617 1404644 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1209 05:35:18.726718 1404644 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1209 05:35:18.726797 1404644 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1209 05:35:18.726883 1404644 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1209 05:35:18.727004 1404644 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1209 05:35:18.736839 1404644 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1209 05:35:18.763585 1404644 cache_images.go:118] "registry.k8s.io/etcd:3.6.5-0" needs transfer: "registry.k8s.io/etcd:3.6.5-0" does not exist at hash "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42" in container runtime
	I1209 05:35:18.763633 1404644 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.5-0
	I1209 05:35:18.763705 1404644 ssh_runner.go:195] Run: which crictl
	I1209 05:35:18.845735 1404644 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1209 05:35:18.845868 1404644 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1209 05:35:18.845963 1404644 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1209 05:35:18.846032 1404644 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1209 05:35:18.855976 1404644 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1209 05:35:18.856113 1404644 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1209 05:35:18.856177 1404644 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1209 05:35:18.974712 1404644 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0
	I1209 05:35:18.974865 1404644 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1209 05:35:18.974965 1404644 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0
	I1209 05:35:18.975078 1404644 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1209 05:35:18.975183 1404644 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1209 05:35:18.975264 1404644 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0
	I1209 05:35:18.975344 1404644 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1209 05:35:18.975425 1404644 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1209 05:35:18.975494 1404644 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1209 05:35:18.975628 1404644 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1209 05:35:18.975814 1404644 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1209 05:35:18.981424 1404644 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.35.0-beta.0': No such file or directory
	I1209 05:35:18.981517 1404644 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0 (22432256 bytes)
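	(Editor's note: each stat/scp pair above and below is the same copy-if-missing protocol: stat -c "%s %y" probes the target, and exit status 1 means the tarball is absent and gets transferred from the host cache. A local sketch of the shape, with paths copied from this log; the real transfer runs over SSH rather than a local cp:)
		SRC=/home/jenkins/minikube-integration/22081-1142328/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0
		DST=/var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
		# Probe the destination; copy only if the probe fails
		stat -c "%s %y" "$DST" >/dev/null 2>&1 || sudo cp "$SRC" "$DST"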
	I1209 05:35:19.087816 1404644 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1
	I1209 05:35:19.087863 1404644 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0': No such file or directory
	I1209 05:35:19.087887 1404644 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0 (24689152 bytes)
	I1209 05:35:19.087934 1404644 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1
	I1209 05:35:19.087937 1404644 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1209 05:35:19.087947 1404644 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
	I1209 05:35:19.087997 1404644 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1209 05:35:19.087825 1404644 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0
	I1209 05:35:19.088091 1404644 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0': No such file or directory
	I1209 05:35:19.088210 1404644 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0 (20672000 bytes)
	I1209 05:35:19.088283 1404644 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1209 05:35:19.140726 1404644 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0': No such file or directory
	I1209 05:35:19.140773 1404644 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0 (15401984 bytes)
	I1209 05:35:19.190102 1404644 containerd.go:285] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1209 05:35:19.190168 1404644 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.10.1
	I1209 05:35:19.191987 1404644 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.13.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.13.1': No such file or directory
	I1209 05:35:19.192116 1404644 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 --> /var/lib/minikube/images/coredns_v1.13.1 (21178368 bytes)
	I1209 05:35:19.192223 1404644 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0
	I1209 05:35:19.192353 1404644 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0
	W1209 05:35:19.374054 1404644 image.go:328] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1209 05:35:19.374294 1404644 containerd.go:267] Checking existence of image with name "gcr.io/k8s-minikube/storage-provisioner:v5" and sha "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51"
	I1209 05:35:19.374379 1404644 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 05:35:19.573223 1404644 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.5-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.5-0': No such file or directory
	I1209 05:35:19.573346 1404644 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 --> /var/lib/minikube/images/etcd_3.6.5-0 (21148160 bytes)
	I1209 05:35:19.573845 1404644 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 from cache
	I1209 05:35:19.573947 1404644 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1209 05:35:19.574005 1404644 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 05:35:19.574130 1404644 ssh_runner.go:195] Run: which crictl
	I1209 05:35:19.674076 1404644 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 05:35:19.861104 1404644 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 05:35:20.039474 1404644 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 05:35:20.118754 1404644 containerd.go:285] Loading image: /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1209 05:35:20.119327 1404644 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1209 05:35:20.200215 1404644 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1209 05:35:20.200389 1404644 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1209 05:35:22.023029 1404644 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: (1.903640785s)
	I1209 05:35:22.023102 1404644 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 from cache
	I1209 05:35:22.023135 1404644 containerd.go:285] Loading image: /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1209 05:35:22.023234 1404644 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1209 05:35:22.023293 1404644 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.82287343s)
	I1209 05:35:22.023354 1404644 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1209 05:35:22.023371 1404644 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1209 05:35:23.825499 1404644 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: (1.802158642s)
	I1209 05:35:23.825528 1404644 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 from cache
	I1209 05:35:23.825545 1404644 containerd.go:285] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1209 05:35:23.825617 1404644 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1209 05:35:25.344432 1404644 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: (1.518790061s)
	I1209 05:35:25.344461 1404644 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 from cache
	I1209 05:35:25.344479 1404644 containerd.go:285] Loading image: /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1209 05:35:25.344536 1404644 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1209 05:35:27.130646 1404644 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: (1.786082864s)
	I1209 05:35:27.130675 1404644 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 from cache
	I1209 05:35:27.130691 1404644 containerd.go:285] Loading image: /var/lib/minikube/images/coredns_v1.13.1
	I1209 05:35:27.130736 1404644 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.13.1
	I1209 05:35:28.584374 1404644 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.13.1: (1.453617427s)
	I1209 05:35:28.584402 1404644 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 from cache
	I1209 05:35:28.584419 1404644 containerd.go:285] Loading image: /var/lib/minikube/images/etcd_3.6.5-0
	I1209 05:35:28.584465 1404644 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.5-0
	I1209 05:35:30.390609 1404644 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.5-0: (1.806117506s)
	I1209 05:35:30.390633 1404644 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 from cache
	I1209 05:35:30.390651 1404644 containerd.go:285] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1209 05:35:30.390698 1404644 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I1209 05:35:30.959305 1404644 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1209 05:35:30.959337 1404644 cache_images.go:125] Successfully loaded all cached images
	I1209 05:35:30.959342 1404644 cache_images.go:94] duration metric: took 12.818145514s to LoadCachedImages
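	(Editor's note: the 12.8s LoadCachedImages phase ends with every transferred tarball imported through ctr. The manual equivalent for a single image, run inside the node, is a one-liner; image file name taken from this log:)
		# Import a cached image tarball into containerd's k8s.io namespace
		sudo ctr -n k8s.io images import /var/lib/minikube/images/etcd_3.6.5-0
		sudo crictl images | grep etcd    # confirm the CRI runtime now sees it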
	I1209 05:35:30.959354 1404644 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1209 05:35:30.959440 1404644 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-842269 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-842269 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
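	(Editor's note: the unit fragment above uses the standard systemd override idiom: the bare "ExecStart=" line first clears the ExecStart inherited from the base kubelet.service, then the second line sets the minikube-specific command. The drop-in is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below, at 05:35:32.576. To inspect the merged unit inside the node:)
		# Show base unit plus all drop-ins, as systemd resolves them
		minikube -p no-preload-842269 ssh -- systemctl cat kubelet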
	I1209 05:35:30.959500 1404644 ssh_runner.go:195] Run: sudo crictl info
	I1209 05:35:30.986513 1404644 cni.go:84] Creating CNI manager for ""
	I1209 05:35:30.986535 1404644 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1209 05:35:30.986555 1404644 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1209 05:35:30.986577 1404644 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-842269 NodeName:no-preload-842269 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1209 05:35:30.986691 1404644 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-842269"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
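	(Editor's note: this rendered config is staged as /var/tmp/minikube/kubeadm.yaml.new, scp at 05:35:32.603 below, and later promoted to kubeadm.yaml. If this kubeadm build supports the subcommand, the file can be sanity-checked independently; a sketch, not part of the minikube flow:)
		# Validate the generated kubeadm config against its API schema
		sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml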
	
	I1209 05:35:30.986763 1404644 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1209 05:35:30.994616 1404644 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.35.0-beta.0': No such file or directory
	
	Initiating transfer...
	I1209 05:35:30.994682 1404644 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.35.0-beta.0
	I1209 05:35:31.002269 1404644 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubectl.sha256
	I1209 05:35:31.002361 1404644 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl
	I1209 05:35:31.003101 1404644 download.go:108] Downloading: https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubelet.sha256 -> /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubelet
	I1209 05:35:31.003101 1404644 download.go:108] Downloading: https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm.sha256 -> /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubeadm
	I1209 05:35:31.007992 1404644 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl': No such file or directory
	I1209 05:35:31.008043 1404644 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubectl --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl (55181496 bytes)
	I1209 05:35:31.854229 1404644 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 05:35:31.871841 1404644 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet
	I1209 05:35:31.877563 1404644 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet': No such file or directory
	I1209 05:35:31.877595 1404644 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubelet --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet (54329636 bytes)
	I1209 05:35:31.926753 1404644 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm
	I1209 05:35:31.931409 1404644 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm': No such file or directory
	I1209 05:35:31.931442 1404644 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubeadm --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm (68354232 bytes)
	I1209 05:35:32.569137 1404644 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1209 05:35:32.576873 1404644 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1209 05:35:32.589981 1404644 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1209 05:35:32.603680 1404644 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
	I1209 05:35:32.617619 1404644 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1209 05:35:32.621431 1404644 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
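	(Editor's note: the bash one-liner above is minikube's idempotent /etc/hosts update: strip any stale control-plane.minikube.internal entry, append the fresh one, and copy the result back via sudo. A quick check of the outcome, profile name from this log:)
		# Confirm the control-plane hosts entry is in place
		minikube -p no-preload-842269 ssh -- grep control-plane.minikube.internal /etc/hosts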
	I1209 05:35:32.631542 1404644 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 05:35:32.746803 1404644 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 05:35:32.777319 1404644 certs.go:69] Setting up /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/no-preload-842269 for IP: 192.168.85.2
	I1209 05:35:32.777342 1404644 certs.go:195] generating shared ca certs ...
	I1209 05:35:32.777359 1404644 certs.go:227] acquiring lock for ca certs: {Name:mk15788702f8c4e23b5aeab3f44961d296fab259 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 05:35:32.777496 1404644 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.key
	I1209 05:35:32.777545 1404644 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/proxy-client-ca.key
	I1209 05:35:32.777557 1404644 certs.go:257] generating profile certs ...
	I1209 05:35:32.777612 1404644 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/no-preload-842269/client.key
	I1209 05:35:32.777627 1404644 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/no-preload-842269/client.crt with IP's: []
	I1209 05:35:33.162281 1404644 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/no-preload-842269/client.crt ...
	I1209 05:35:33.162317 1404644 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/no-preload-842269/client.crt: {Name:mkd79e34d1fd635a2715b2fec0ffbb95d75a41a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 05:35:33.162513 1404644 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/no-preload-842269/client.key ...
	I1209 05:35:33.162527 1404644 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/no-preload-842269/client.key: {Name:mk5819922e9f1f0e9e9f6e801582a3cf96576b0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 05:35:33.162621 1404644 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/no-preload-842269/apiserver.key.135a6aab
	I1209 05:35:33.162639 1404644 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/no-preload-842269/apiserver.crt.135a6aab with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1209 05:35:33.729087 1404644 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/no-preload-842269/apiserver.crt.135a6aab ...
	I1209 05:35:33.729159 1404644 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/no-preload-842269/apiserver.crt.135a6aab: {Name:mk0b1d2fcb667e285546fd82e1004c7ba850ae83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 05:35:33.729369 1404644 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/no-preload-842269/apiserver.key.135a6aab ...
	I1209 05:35:33.729403 1404644 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/no-preload-842269/apiserver.key.135a6aab: {Name:mk870fb50987016f41ae680dc57041b28a38a3e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 05:35:33.729528 1404644 certs.go:382] copying /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/no-preload-842269/apiserver.crt.135a6aab -> /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/no-preload-842269/apiserver.crt
	I1209 05:35:33.729643 1404644 certs.go:386] copying /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/no-preload-842269/apiserver.key.135a6aab -> /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/no-preload-842269/apiserver.key
	I1209 05:35:33.729757 1404644 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/no-preload-842269/proxy-client.key
	I1209 05:35:33.729793 1404644 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/no-preload-842269/proxy-client.crt with IP's: []
	I1209 05:35:34.419822 1404644 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/no-preload-842269/proxy-client.crt ...
	I1209 05:35:34.419908 1404644 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/no-preload-842269/proxy-client.crt: {Name:mkabe7b9a8c59673ad7856440daf1f060d2cba03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 05:35:34.420187 1404644 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/no-preload-842269/proxy-client.key ...
	I1209 05:35:34.420235 1404644 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/no-preload-842269/proxy-client.key: {Name:mk416143f8a35cc4e2e07a4624d08b2458dffad5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 05:35:34.420483 1404644 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/1144231.pem (1338 bytes)
	W1209 05:35:34.420561 1404644 certs.go:480] ignoring /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/1144231_empty.pem, impossibly tiny 0 bytes
	I1209 05:35:34.420599 1404644 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca-key.pem (1679 bytes)
	I1209 05:35:34.420654 1404644 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem (1078 bytes)
	I1209 05:35:34.420721 1404644 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/cert.pem (1123 bytes)
	I1209 05:35:34.420772 1404644 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/key.pem (1675 bytes)
	I1209 05:35:34.420864 1404644 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/ssl/certs/11442312.pem (1708 bytes)
	I1209 05:35:34.421500 1404644 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 05:35:34.489923 1404644 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1209 05:35:34.531071 1404644 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 05:35:34.571054 1404644 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1209 05:35:34.595158 1404644 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/no-preload-842269/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1209 05:35:34.628631 1404644 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/no-preload-842269/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1209 05:35:34.660884 1404644 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/no-preload-842269/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 05:35:34.689821 1404644 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/no-preload-842269/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1209 05:35:34.719521 1404644 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 05:35:34.755239 1404644 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/1144231.pem --> /usr/share/ca-certificates/1144231.pem (1338 bytes)
	I1209 05:35:34.791865 1404644 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/ssl/certs/11442312.pem --> /usr/share/ca-certificates/11442312.pem (1708 bytes)
	I1209 05:35:34.814239 1404644 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 05:35:34.837347 1404644 ssh_runner.go:195] Run: openssl version
	I1209 05:35:34.849189 1404644 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1209 05:35:34.857370 1404644 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1209 05:35:34.866905 1404644 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 05:35:34.871150 1404644 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 04:09 /usr/share/ca-certificates/minikubeCA.pem
	I1209 05:35:34.871215 1404644 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 05:35:34.920992 1404644 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1209 05:35:34.929051 1404644 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1209 05:35:34.936861 1404644 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1144231.pem
	I1209 05:35:34.944408 1404644 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1144231.pem /etc/ssl/certs/1144231.pem
	I1209 05:35:34.952465 1404644 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1144231.pem
	I1209 05:35:34.959780 1404644 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 04:18 /usr/share/ca-certificates/1144231.pem
	I1209 05:35:34.959852 1404644 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1144231.pem
	I1209 05:35:35.016661 1404644 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1209 05:35:35.030414 1404644 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/1144231.pem /etc/ssl/certs/51391683.0
	I1209 05:35:35.042638 1404644 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/11442312.pem
	I1209 05:35:35.055147 1404644 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/11442312.pem /etc/ssl/certs/11442312.pem
	I1209 05:35:35.063656 1404644 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11442312.pem
	I1209 05:35:35.067472 1404644 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 04:18 /usr/share/ca-certificates/11442312.pem
	I1209 05:35:35.067623 1404644 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11442312.pem
	I1209 05:35:35.133043 1404644 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1209 05:35:35.140688 1404644 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/11442312.pem /etc/ssl/certs/3ec20f2e.0
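	(Editor's note: the openssl/ln pairs above implement OpenSSL's subject-hash lookup scheme: a CA is trusted by symlinking <subject-hash>.0 in /etc/ssl/certs to the PEM file, which is where the b5213941.0, 51391683.0 and 3ec20f2e.0 names come from. The idiom in two lines, cert path from this log:)
		# Compute the subject hash and create the lookup symlink OpenSSL expects
		HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
		sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"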
	I1209 05:35:35.148449 1404644 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 05:35:35.153969 1404644 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1209 05:35:35.154021 1404644 kubeadm.go:401] StartCluster: {Name:no-preload-842269 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-842269 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 05:35:35.154274 1404644 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1209 05:35:35.154343 1404644 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 05:35:35.216799 1404644 cri.go:89] found id: ""
	I1209 05:35:35.216876 1404644 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1209 05:35:35.228196 1404644 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 05:35:35.254211 1404644 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1209 05:35:35.254276 1404644 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 05:35:35.271912 1404644 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 05:35:35.271979 1404644 kubeadm.go:158] found existing configuration files:
	
	I1209 05:35:35.272076 1404644 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1209 05:35:35.301279 1404644 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 05:35:35.301394 1404644 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 05:35:35.322272 1404644 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1209 05:35:35.333182 1404644 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 05:35:35.333248 1404644 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 05:35:35.345140 1404644 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1209 05:35:35.362482 1404644 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 05:35:35.362548 1404644 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 05:35:35.372888 1404644 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1209 05:35:35.386824 1404644 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 05:35:35.386893 1404644 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
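	(Editor's note: the four grep/rm rounds above are the stale-kubeconfig sweep: any conf file that does not point at https://control-plane.minikube.internal:8443 is removed. On this first start every grep exits 2 because the files do not exist yet, so the rm calls are no-ops. The loop shape, sketched:)
		# Remove any kubeconfig not pointing at the expected control-plane endpoint
		for f in admin kubelet controller-manager scheduler; do
		  sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/${f}.conf" \
		    || sudo rm -f "/etc/kubernetes/${f}.conf"
		done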
	I1209 05:35:35.399830 1404644 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1209 05:35:35.458591 1404644 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1209 05:35:35.458941 1404644 kubeadm.go:319] [preflight] Running pre-flight checks
	I1209 05:35:35.577700 1404644 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1209 05:35:35.577772 1404644 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1209 05:35:35.577808 1404644 kubeadm.go:319] OS: Linux
	I1209 05:35:35.577853 1404644 kubeadm.go:319] CGROUPS_CPU: enabled
	I1209 05:35:35.577901 1404644 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1209 05:35:35.577948 1404644 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1209 05:35:35.577997 1404644 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1209 05:35:35.578045 1404644 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1209 05:35:35.578094 1404644 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1209 05:35:35.578139 1404644 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1209 05:35:35.578194 1404644 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1209 05:35:35.578244 1404644 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1209 05:35:35.694013 1404644 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1209 05:35:35.694127 1404644 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1209 05:35:35.694222 1404644 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1209 05:35:35.699998 1404644 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1209 05:35:35.720434 1404644 out.go:252]   - Generating certificates and keys ...
	I1209 05:35:35.720548 1404644 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1209 05:35:35.720622 1404644 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1209 05:35:35.947258 1404644 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1209 05:35:36.093795 1404644 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1209 05:35:36.147087 1404644 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1209 05:35:36.552256 1404644 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1209 05:35:37.123897 1404644 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1209 05:35:37.124081 1404644 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-842269] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1209 05:35:37.517718 1404644 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1209 05:35:37.517999 1404644 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-842269] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1209 05:35:37.713269 1404644 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1209 05:35:38.092122 1404644 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1209 05:35:38.137762 1404644 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1209 05:35:38.138006 1404644 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1209 05:35:38.266567 1404644 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1209 05:35:39.245637 1404644 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1209 05:35:39.640702 1404644 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1209 05:35:39.986852 1404644 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1209 05:35:40.257808 1404644 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1209 05:35:40.258512 1404644 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1209 05:35:40.263013 1404644 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1209 05:35:40.266456 1404644 out.go:252]   - Booting up control plane ...
	I1209 05:35:40.266572 1404644 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1209 05:35:40.266657 1404644 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1209 05:35:40.267990 1404644 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1209 05:35:40.288010 1404644 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1209 05:35:40.288179 1404644 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1209 05:35:40.297093 1404644 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1209 05:35:40.297706 1404644 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1209 05:35:40.297950 1404644 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1209 05:35:40.442358 1404644 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1209 05:35:40.443090 1404644 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1209 05:39:40.443950 1404644 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001080214s
	I1209 05:39:40.443976 1404644 kubeadm.go:319] 
	I1209 05:39:40.444045 1404644 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1209 05:39:40.444079 1404644 kubeadm.go:319] 	- The kubelet is not running
	I1209 05:39:40.444192 1404644 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1209 05:39:40.444198 1404644 kubeadm.go:319] 
	I1209 05:39:40.444302 1404644 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1209 05:39:40.444334 1404644 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1209 05:39:40.444365 1404644 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1209 05:39:40.444369 1404644 kubeadm.go:319] 
	I1209 05:39:40.454378 1404644 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1209 05:39:40.454892 1404644 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1209 05:39:40.455030 1404644 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1209 05:39:40.455334 1404644 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1209 05:39:40.455364 1404644 kubeadm.go:319] 
	I1209 05:39:40.455454 1404644 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
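The kubelet-check lines above show exactly what kubeadm is waiting on: a successful response from the kubelet's local health endpoint. A minimal way to run the same probe by hand on the node (via minikube ssh; the profile name is the one from this run) would be:

    minikube ssh -p no-preload-842269
    curl -sSL http://127.0.0.1:10248/healthz    # the exact URL kubeadm polls
    systemctl status kubelet                    # first troubleshooting step suggested above
    journalctl -xeu kubelet | tail -n 50        # recent kubelet log entries

On this node the curl fails with "connection refused", which is what the wait-control-plane error above reports.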
	W1209 05:39:40.455579 1404644 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost no-preload-842269] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-842269] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001080214s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
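The repeated cgroups-v1 warning names its own opt-in: kubelet v1.35 or newer on a cgroup v1 host must set FailCgroupV1 to false in the kubelet configuration. As a sketch only (the field name is taken from the warning text; this run did not attempt it), the KubeletConfiguration fragment would look like:

    # hypothetical fragment for /var/lib/kubelet/config.yaml, per the warning above
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    failCgroupV1: false

The warning adds that the SystemVerification check must also be skipped explicitly, which minikube already does here via the --ignore-preflight-errors list in the failing command.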
	
	I1209 05:39:40.455709 1404644 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
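Before retrying, minikube resets the node with kubeadm reset and then checks that the kubelet unit is no longer active. The is-active probe on the next line relies on systemctl's exit status rather than its output; its standard form is:

    systemctl is-active --quiet kubelet && echo active || echo inactive

(exit status 0 means the unit is active; any nonzero status means it is not).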
	I1209 05:39:40.955876 1404644 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 05:39:40.978656 1404644 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1209 05:39:40.978767 1404644 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 05:39:40.993301 1404644 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 05:39:40.993367 1404644 kubeadm.go:158] found existing configuration files:
	
	I1209 05:39:40.993430 1404644 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1209 05:39:41.010537 1404644 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 05:39:41.010632 1404644 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 05:39:41.021439 1404644 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1209 05:39:41.034778 1404644 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 05:39:41.034858 1404644 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 05:39:41.045950 1404644 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1209 05:39:41.058456 1404644 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 05:39:41.058534 1404644 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 05:39:41.070604 1404644 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1209 05:39:41.084995 1404644 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 05:39:41.085065 1404644 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
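The grep/rm sequence above is minikube's stale-kubeconfig sweep: each of the four kubeconfigs is kept only if it references the expected control-plane endpoint, and is otherwise deleted so kubeadm can regenerate it. Condensed into a loop, the logic being run is roughly:

    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q https://control-plane.minikube.internal:8443 /etc/kubernetes/$f.conf \
        || sudo rm -f /etc/kubernetes/$f.conf
    done

All four greps exit with status 2 here because kubeadm reset already removed the files (GNU grep uses 2 for errors such as a missing file, not just for "no match"), so the rm calls are no-ops.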
	I1209 05:39:41.093616 1404644 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1209 05:39:41.139759 1404644 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1209 05:39:41.140043 1404644 kubeadm.go:319] [preflight] Running pre-flight checks
	I1209 05:39:41.250321 1404644 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1209 05:39:41.250430 1404644 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1209 05:39:41.250479 1404644 kubeadm.go:319] OS: Linux
	I1209 05:39:41.250539 1404644 kubeadm.go:319] CGROUPS_CPU: enabled
	I1209 05:39:41.250597 1404644 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1209 05:39:41.250659 1404644 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1209 05:39:41.250720 1404644 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1209 05:39:41.250781 1404644 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1209 05:39:41.250847 1404644 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1209 05:39:41.250912 1404644 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1209 05:39:41.250974 1404644 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1209 05:39:41.251029 1404644 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1209 05:39:41.328401 1404644 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1209 05:39:41.328524 1404644 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1209 05:39:41.328621 1404644 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1209 05:39:41.336403 1404644 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1209 05:39:41.342138 1404644 out.go:252]   - Generating certificates and keys ...
	I1209 05:39:41.342239 1404644 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1209 05:39:41.342335 1404644 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1209 05:39:41.342438 1404644 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1209 05:39:41.342510 1404644 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1209 05:39:41.342588 1404644 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1209 05:39:41.342653 1404644 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1209 05:39:41.342725 1404644 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1209 05:39:41.342803 1404644 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1209 05:39:41.342891 1404644 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1209 05:39:41.342974 1404644 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1209 05:39:41.343022 1404644 kubeadm.go:319] [certs] Using the existing "sa" key
	I1209 05:39:41.343087 1404644 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1209 05:39:41.637123 1404644 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1209 05:39:41.799509 1404644 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1209 05:39:42.134998 1404644 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1209 05:39:42.322322 1404644 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1209 05:39:42.671545 1404644 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1209 05:39:42.672561 1404644 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1209 05:39:42.675229 1404644 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1209 05:39:42.678331 1404644 out.go:252]   - Booting up control plane ...
	I1209 05:39:42.678441 1404644 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1209 05:39:42.678531 1404644 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1209 05:39:42.679166 1404644 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1209 05:39:42.700677 1404644 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1209 05:39:42.700860 1404644 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1209 05:39:42.710520 1404644 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1209 05:39:42.714450 1404644 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1209 05:39:42.714776 1404644 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1209 05:39:42.939646 1404644 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1209 05:39:42.939773 1404644 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1209 05:43:42.940465 1404644 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001103315s
	I1209 05:43:42.940494 1404644 kubeadm.go:319] 
	I1209 05:43:42.940552 1404644 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1209 05:43:42.940585 1404644 kubeadm.go:319] 	- The kubelet is not running
	I1209 05:43:42.940690 1404644 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1209 05:43:42.940694 1404644 kubeadm.go:319] 
	I1209 05:43:42.940799 1404644 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1209 05:43:42.940831 1404644 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1209 05:43:42.940862 1404644 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1209 05:43:42.940866 1404644 kubeadm.go:319] 
	I1209 05:43:42.944449 1404644 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1209 05:43:42.944876 1404644 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1209 05:43:42.944989 1404644 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1209 05:43:42.945227 1404644 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1209 05:43:42.945235 1404644 kubeadm.go:319] 
	I1209 05:43:42.945305 1404644 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1209 05:43:42.945358 1404644 kubeadm.go:403] duration metric: took 8m7.791342576s to StartCluster
	I1209 05:43:42.945399 1404644 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:43:42.945466 1404644 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:43:42.969310 1404644 cri.go:89] found id: ""
	I1209 05:43:42.969335 1404644 logs.go:282] 0 containers: []
	W1209 05:43:42.969343 1404644 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:43:42.969349 1404644 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:43:42.969414 1404644 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:43:42.997525 1404644 cri.go:89] found id: ""
	I1209 05:43:42.997547 1404644 logs.go:282] 0 containers: []
	W1209 05:43:42.997556 1404644 logs.go:284] No container was found matching "etcd"
	I1209 05:43:42.997562 1404644 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:43:42.997619 1404644 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:43:43.022335 1404644 cri.go:89] found id: ""
	I1209 05:43:43.022360 1404644 logs.go:282] 0 containers: []
	W1209 05:43:43.022369 1404644 logs.go:284] No container was found matching "coredns"
	I1209 05:43:43.022380 1404644 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:43:43.022440 1404644 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:43:43.046700 1404644 cri.go:89] found id: ""
	I1209 05:43:43.046725 1404644 logs.go:282] 0 containers: []
	W1209 05:43:43.046734 1404644 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:43:43.046739 1404644 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:43:43.046797 1404644 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:43:43.071875 1404644 cri.go:89] found id: ""
	I1209 05:43:43.071906 1404644 logs.go:282] 0 containers: []
	W1209 05:43:43.071915 1404644 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:43:43.071921 1404644 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:43:43.071986 1404644 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:43:43.096153 1404644 cri.go:89] found id: ""
	I1209 05:43:43.096176 1404644 logs.go:282] 0 containers: []
	W1209 05:43:43.096190 1404644 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:43:43.096198 1404644 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:43:43.096259 1404644 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:43:43.121898 1404644 cri.go:89] found id: ""
	I1209 05:43:43.121922 1404644 logs.go:282] 0 containers: []
	W1209 05:43:43.121931 1404644 logs.go:284] No container was found matching "kindnet"
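Every container scan above comes back empty, which is consistent with a kubelet that never started: nothing ever asked containerd to create a pod sandbox, so there are no control-plane containers to find. Quick manual confirmation on the node would be:

    sudo crictl ps -a     # all containers in any state; empty on this node
    sudo crictl pods      # pod sandboxes; likewise empty when the kubelet never runs

(crictl here is the same CLI the log gathering below shells out to).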
	I1209 05:43:43.121940 1404644 logs.go:123] Gathering logs for containerd ...
	I1209 05:43:43.121951 1404644 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:43:43.163306 1404644 logs.go:123] Gathering logs for container status ...
	I1209 05:43:43.163339 1404644 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:43:43.207532 1404644 logs.go:123] Gathering logs for kubelet ...
	I1209 05:43:43.207567 1404644 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:43:43.277243 1404644 logs.go:123] Gathering logs for dmesg ...
	I1209 05:43:43.277279 1404644 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:43:43.298477 1404644 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:43:43.298507 1404644 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:43:43.365347 1404644 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:43:43.357461    5461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:43:43.358090    5461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:43:43.359781    5461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:43:43.360115    5461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:43:43.361539    5461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:43:43.357461    5461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:43:43.358090    5461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:43:43.359781    5461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:43:43.360115    5461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:43:43.361539    5461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
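The describe-nodes failure above is a downstream symptom rather than a separate problem: kubectl is dialing the apiserver on localhost:8443, and no apiserver container exists (see the empty crictl scans earlier). Any kubectl call through the node's kubeconfig fails the same way until the control plane comes up, for example:

    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get nodes \
        --kubeconfig=/var/lib/minikube/kubeconfig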
	W1209 05:43:43.365381 1404644 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001103315s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1209 05:43:43.365413 1404644 out.go:285] * 
	W1209 05:43:43.365475 1404644 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001103315s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1209 05:43:43.365493 1404644 out.go:285] * 
	W1209 05:43:43.367868 1404644 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
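For the log bundle the box asks for, the profile flag from this run applies; the full invocation would be:

    out/minikube-linux-arm64 logs -p no-preload-842269 --file=logs.txt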
	I1209 05:43:43.374302 1404644 out.go:203] 
	W1209 05:43:43.377044 1404644 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001103315s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1209 05:43:43.377091 1404644 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1209 05:43:43.377112 1404644 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1209 05:43:43.380387 1404644 out.go:203] 

** /stderr **
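The kubeadm block above both localizes the failure (the kubelet never answered its /healthz probe within 4m0s) and names the remedies. As a reproduction sketch, assuming only the profile name from this run and stock minikube/systemd tooling, the same checks can be run by hand:

	# Probe kubelet health from inside the node (the same endpoint kubeadm polls)
	out/minikube-linux-arm64 ssh -p no-preload-842269 -- curl -sSL http://127.0.0.1:10248/healthz
	# Inspect the unit and its journal, per the suggestions printed above
	out/minikube-linux-arm64 ssh -p no-preload-842269 -- sudo systemctl status kubelet
	out/minikube-linux-arm64 ssh -p no-preload-842269 -- sudo journalctl -xeu kubelet
	# Retry the start with the cgroup-driver override suggested in the stderr
	out/minikube-linux-arm64 start -p no-preload-842269 --extra-config=kubelet.cgroup-driver=systemd

The [WARNING SystemVerification] line is likely the key detail: that warning only fires when the node is on cgroups v1 (consistent with the 5.15 Ubuntu 20.04 host and the CgroupDriver:cgroupfs reported by docker info later in this log), and per the warning text a v1.35+ kubelet refuses cgroups v1 unless its configuration sets FailCgroupV1 to false, which would explain a kubelet that dies before ever serving /healthz.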
start_stop_delete_test.go:186: failed starting minikube -first start-. args "out/minikube-linux-arm64 start -p no-preload-842269 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/FirstStart]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/FirstStart]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-842269
helpers_test.go:243: (dbg) docker inspect no-preload-842269:

-- stdout --
	[
	    {
	        "Id": "9789b34a5453b154c4ceca4f0038c1d7948d7f0f72f334a114a8452d803ad415",
	        "Created": "2025-12-09T05:35:10.617601088Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1404960,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-09T05:35:10.694361506Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e4eb91ed18a24161fce60c7cdd660144ecd5b8c5029dc2dea2c5e423c2f48ce4",
	        "ResolvConfPath": "/var/lib/docker/containers/9789b34a5453b154c4ceca4f0038c1d7948d7f0f72f334a114a8452d803ad415/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9789b34a5453b154c4ceca4f0038c1d7948d7f0f72f334a114a8452d803ad415/hostname",
	        "HostsPath": "/var/lib/docker/containers/9789b34a5453b154c4ceca4f0038c1d7948d7f0f72f334a114a8452d803ad415/hosts",
	        "LogPath": "/var/lib/docker/containers/9789b34a5453b154c4ceca4f0038c1d7948d7f0f72f334a114a8452d803ad415/9789b34a5453b154c4ceca4f0038c1d7948d7f0f72f334a114a8452d803ad415-json.log",
	        "Name": "/no-preload-842269",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-842269:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-842269",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9789b34a5453b154c4ceca4f0038c1d7948d7f0f72f334a114a8452d803ad415",
	                "LowerDir": "/var/lib/docker/overlay2/658f95b1f330fcb0d4774e9611c50218d409d0a889583e0050325b4fe479e9f7-init/diff:/var/lib/docker/overlay2/c44bb57aa59cc265266f37f2bb6e7ec0e7d641c3b4aeaa57e6d23deec6f0d1d4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/658f95b1f330fcb0d4774e9611c50218d409d0a889583e0050325b4fe479e9f7/merged",
	                "UpperDir": "/var/lib/docker/overlay2/658f95b1f330fcb0d4774e9611c50218d409d0a889583e0050325b4fe479e9f7/diff",
	                "WorkDir": "/var/lib/docker/overlay2/658f95b1f330fcb0d4774e9611c50218d409d0a889583e0050325b4fe479e9f7/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-842269",
	                "Source": "/var/lib/docker/volumes/no-preload-842269/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-842269",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-842269",
	                "name.minikube.sigs.k8s.io": "no-preload-842269",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c8d638bf0ac3f8de516cba00d80a3b149af62367900ced69943b89e3e7924db8",
	            "SandboxKey": "/var/run/docker/netns/c8d638bf0ac3",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34185"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34186"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34189"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34187"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34188"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-842269": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "4e:5c:05:82:25:f0",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6461bd7226e5723487f325bf78054dc63f1dafa2831abe7b44a8cc288dfa4456",
	                    "EndpointID": "5bccd85f7c02ee9bc4903397b85755d423fd035b5d120846d74ca8550b415301",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-842269",
	                        "9789b34a5453"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
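In the inspect output above, every entry under HostConfig.PortBindings has an empty HostPort: minikube binds each container port to 127.0.0.1 and lets Docker choose an ephemeral host port, so the assigned values (34185-34189 here) appear only under NetworkSettings.Ports. To recover a mapping by hand, sketched with this run's container name, either docker port or the same Go template minikube itself uses later in this log will do:

	docker port no-preload-842269 8443
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' no-preload-842269

For this container the first form should print 127.0.0.1:34188 and the second just 34188.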
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-842269 -n no-preload-842269
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-842269 -n no-preload-842269: exit status 6 (331.1373ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1209 05:43:43.803501 1426856 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-842269" does not appear in /home/jenkins/minikube-integration/22081-1142328/kubeconfig

** /stderr **
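This non-zero status follows directly from the failed FirstStart: the cluster endpoint was never written to the kubeconfig, so status can see the host container ("Running") but not the Kubernetes side. Were the cluster actually healthy, the advice printed in the stdout above would apply; a sketch with this run's profile:

	out/minikube-linux-arm64 update-context -p no-preload-842269
	kubectl config current-context

update-context rewrites the stored kubeconfig entry to the cluster's current IP and port; here it would keep failing until the start itself succeeds.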
helpers_test.go:247: status error: exit status 6 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/FirstStart FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/FirstStart]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-842269 logs -n 25
helpers_test.go:260: TestStartStop/group/no-preload/serial/FirstStart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                            ARGS                                                                                                                            │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p embed-certs-432108 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-432108           │ jenkins │ v1.37.0 │ 09 Dec 25 05:34 UTC │ 09 Dec 25 05:36 UTC │
	│ start   │ -p cert-expiration-074045 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd                                                                                                                                            │ cert-expiration-074045       │ jenkins │ v1.37.0 │ 09 Dec 25 05:34 UTC │ 09 Dec 25 05:35 UTC │
	│ delete  │ -p cert-expiration-074045                                                                                                                                                                                                                                  │ cert-expiration-074045       │ jenkins │ v1.37.0 │ 09 Dec 25 05:35 UTC │ 09 Dec 25 05:35 UTC │
	│ delete  │ -p disable-driver-mounts-094940                                                                                                                                                                                                                            │ disable-driver-mounts-094940 │ jenkins │ v1.37.0 │ 09 Dec 25 05:35 UTC │ 09 Dec 25 05:35 UTC │
	│ start   │ -p no-preload-842269 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-842269            │ jenkins │ v1.37.0 │ 09 Dec 25 05:35 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-432108 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                   │ embed-certs-432108           │ jenkins │ v1.37.0 │ 09 Dec 25 05:36 UTC │ 09 Dec 25 05:36 UTC │
	│ stop    │ -p embed-certs-432108 --alsologtostderr -v=3                                                                                                                                                                                                               │ embed-certs-432108           │ jenkins │ v1.37.0 │ 09 Dec 25 05:36 UTC │ 09 Dec 25 05:36 UTC │
	│ addons  │ enable dashboard -p embed-certs-432108 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                              │ embed-certs-432108           │ jenkins │ v1.37.0 │ 09 Dec 25 05:36 UTC │ 09 Dec 25 05:36 UTC │
	│ start   │ -p embed-certs-432108 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-432108           │ jenkins │ v1.37.0 │ 09 Dec 25 05:36 UTC │ 09 Dec 25 05:37 UTC │
	│ image   │ embed-certs-432108 image list --format=json                                                                                                                                                                                                                │ embed-certs-432108           │ jenkins │ v1.37.0 │ 09 Dec 25 05:37 UTC │ 09 Dec 25 05:37 UTC │
	│ pause   │ -p embed-certs-432108 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-432108           │ jenkins │ v1.37.0 │ 09 Dec 25 05:37 UTC │ 09 Dec 25 05:37 UTC │
	│ unpause │ -p embed-certs-432108 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-432108           │ jenkins │ v1.37.0 │ 09 Dec 25 05:37 UTC │ 09 Dec 25 05:37 UTC │
	│ delete  │ -p embed-certs-432108                                                                                                                                                                                                                                      │ embed-certs-432108           │ jenkins │ v1.37.0 │ 09 Dec 25 05:37 UTC │ 09 Dec 25 05:37 UTC │
	│ delete  │ -p embed-certs-432108                                                                                                                                                                                                                                      │ embed-certs-432108           │ jenkins │ v1.37.0 │ 09 Dec 25 05:37 UTC │ 09 Dec 25 05:37 UTC │
	│ start   │ -p default-k8s-diff-port-564611 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-564611 │ jenkins │ v1.37.0 │ 09 Dec 25 05:37 UTC │ 09 Dec 25 05:39 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-564611 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ default-k8s-diff-port-564611 │ jenkins │ v1.37.0 │ 09 Dec 25 05:39 UTC │ 09 Dec 25 05:39 UTC │
	│ stop    │ -p default-k8s-diff-port-564611 --alsologtostderr -v=3                                                                                                                                                                                                     │ default-k8s-diff-port-564611 │ jenkins │ v1.37.0 │ 09 Dec 25 05:39 UTC │ 09 Dec 25 05:39 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-564611 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ default-k8s-diff-port-564611 │ jenkins │ v1.37.0 │ 09 Dec 25 05:39 UTC │ 09 Dec 25 05:39 UTC │
	│ start   │ -p default-k8s-diff-port-564611 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-564611 │ jenkins │ v1.37.0 │ 09 Dec 25 05:39 UTC │ 09 Dec 25 05:40 UTC │
	│ image   │ default-k8s-diff-port-564611 image list --format=json                                                                                                                                                                                                      │ default-k8s-diff-port-564611 │ jenkins │ v1.37.0 │ 09 Dec 25 05:40 UTC │ 09 Dec 25 05:40 UTC │
	│ pause   │ -p default-k8s-diff-port-564611 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-564611 │ jenkins │ v1.37.0 │ 09 Dec 25 05:40 UTC │ 09 Dec 25 05:40 UTC │
	│ unpause │ -p default-k8s-diff-port-564611 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-564611 │ jenkins │ v1.37.0 │ 09 Dec 25 05:40 UTC │ 09 Dec 25 05:40 UTC │
	│ delete  │ -p default-k8s-diff-port-564611                                                                                                                                                                                                                            │ default-k8s-diff-port-564611 │ jenkins │ v1.37.0 │ 09 Dec 25 05:40 UTC │ 09 Dec 25 05:40 UTC │
	│ delete  │ -p default-k8s-diff-port-564611                                                                                                                                                                                                                            │ default-k8s-diff-port-564611 │ jenkins │ v1.37.0 │ 09 Dec 25 05:40 UTC │ 09 Dec 25 05:40 UTC │
	│ start   │ -p newest-cni-262540 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-262540            │ jenkins │ v1.37.0 │ 09 Dec 25 05:40 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/09 05:40:41
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 05:40:41.014166 1422398 out.go:360] Setting OutFile to fd 1 ...
	I1209 05:40:41.014346 1422398 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 05:40:41.014376 1422398 out.go:374] Setting ErrFile to fd 2...
	I1209 05:40:41.014403 1422398 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 05:40:41.014777 1422398 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-1142328/.minikube/bin
	I1209 05:40:41.015346 1422398 out.go:368] Setting JSON to false
	I1209 05:40:41.016651 1422398 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":30164,"bootTime":1765228677,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1209 05:40:41.016752 1422398 start.go:143] virtualization:  
	I1209 05:40:41.020737 1422398 out.go:179] * [newest-cni-262540] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1209 05:40:41.025100 1422398 out.go:179]   - MINIKUBE_LOCATION=22081
	I1209 05:40:41.025177 1422398 notify.go:221] Checking for updates...
	I1209 05:40:41.031377 1422398 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 05:40:41.034527 1422398 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22081-1142328/kubeconfig
	I1209 05:40:41.037660 1422398 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-1142328/.minikube
	I1209 05:40:41.040646 1422398 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1209 05:40:41.043555 1422398 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 05:40:41.047098 1422398 config.go:182] Loaded profile config "no-preload-842269": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1209 05:40:41.047203 1422398 driver.go:422] Setting default libvirt URI to qemu:///system
	I1209 05:40:41.082759 1422398 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1209 05:40:41.082877 1422398 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 05:40:41.141221 1422398 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-09 05:40:41.131267754 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1209 05:40:41.141331 1422398 docker.go:319] overlay module found
	I1209 05:40:41.144673 1422398 out.go:179] * Using the docker driver based on user configuration
	I1209 05:40:41.147595 1422398 start.go:309] selected driver: docker
	I1209 05:40:41.147618 1422398 start.go:927] validating driver "docker" against <nil>
	I1209 05:40:41.147633 1422398 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 05:40:41.148480 1422398 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 05:40:41.205051 1422398 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-09 05:40:41.195808894 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1209 05:40:41.205216 1422398 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1209 05:40:41.205249 1422398 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1209 05:40:41.205488 1422398 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1209 05:40:41.208233 1422398 out.go:179] * Using Docker driver with root privileges
	I1209 05:40:41.211172 1422398 cni.go:84] Creating CNI manager for ""
	I1209 05:40:41.211250 1422398 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1209 05:40:41.211263 1422398 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1209 05:40:41.211347 1422398 start.go:353] cluster config:
	{Name:newest-cni-262540 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-262540 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 05:40:41.214410 1422398 out.go:179] * Starting "newest-cni-262540" primary control-plane node in "newest-cni-262540" cluster
	I1209 05:40:41.217388 1422398 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1209 05:40:41.220416 1422398 out.go:179] * Pulling base image v0.0.48-1765184860-22066 ...
	I1209 05:40:41.223240 1422398 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1209 05:40:41.223288 1422398 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1209 05:40:41.223310 1422398 cache.go:65] Caching tarball of preloaded images
	I1209 05:40:41.223322 1422398 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local docker daemon
	I1209 05:40:41.223405 1422398 preload.go:238] Found /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1209 05:40:41.223416 1422398 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1209 05:40:41.223520 1422398 profile.go:143] Saving config to /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/config.json ...
	I1209 05:40:41.223546 1422398 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/config.json: {Name:mk3f2f0447b25b9c02ca47937d45ed297c23b284 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 05:40:41.242533 1422398 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local docker daemon, skipping pull
	I1209 05:40:41.242556 1422398 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c exists in daemon, skipping load
	I1209 05:40:41.242574 1422398 cache.go:243] Successfully downloaded all kic artifacts
	I1209 05:40:41.242607 1422398 start.go:360] acquireMachinesLock for newest-cni-262540: {Name:mk272d84ff1bc8c8949f2f0b1f608a7519899d10 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 05:40:41.242722 1422398 start.go:364] duration metric: took 94.012µs to acquireMachinesLock for "newest-cni-262540"
	I1209 05:40:41.242752 1422398 start.go:93] Provisioning new machine with config: &{Name:newest-cni-262540 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-262540 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1209 05:40:41.242832 1422398 start.go:125] createHost starting for "" (driver="docker")
	I1209 05:40:41.246278 1422398 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1209 05:40:41.246513 1422398 start.go:159] libmachine.API.Create for "newest-cni-262540" (driver="docker")
	I1209 05:40:41.246549 1422398 client.go:173] LocalClient.Create starting
	I1209 05:40:41.246618 1422398 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem
	I1209 05:40:41.246653 1422398 main.go:143] libmachine: Decoding PEM data...
	I1209 05:40:41.246672 1422398 main.go:143] libmachine: Parsing certificate...
	I1209 05:40:41.246730 1422398 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/cert.pem
	I1209 05:40:41.246753 1422398 main.go:143] libmachine: Decoding PEM data...
	I1209 05:40:41.246765 1422398 main.go:143] libmachine: Parsing certificate...
	I1209 05:40:41.247138 1422398 cli_runner.go:164] Run: docker network inspect newest-cni-262540 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1209 05:40:41.262988 1422398 cli_runner.go:211] docker network inspect newest-cni-262540 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1209 05:40:41.263073 1422398 network_create.go:284] running [docker network inspect newest-cni-262540] to gather additional debugging logs...
	I1209 05:40:41.263095 1422398 cli_runner.go:164] Run: docker network inspect newest-cni-262540
	W1209 05:40:41.279120 1422398 cli_runner.go:211] docker network inspect newest-cni-262540 returned with exit code 1
	I1209 05:40:41.279154 1422398 network_create.go:287] error running [docker network inspect newest-cni-262540]: docker network inspect newest-cni-262540: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-262540 not found
	I1209 05:40:41.279168 1422398 network_create.go:289] output of [docker network inspect newest-cni-262540]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-262540 not found
	
	** /stderr **
	I1209 05:40:41.279286 1422398 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1209 05:40:41.295748 1422398 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-7a15eec16b1a IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:8a:b7:58:bc:12:6c} reservation:<nil>}
	I1209 05:40:41.296192 1422398 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-fcb9e6b38e8e IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:56:c3:7a:b4:06:4b} reservation:<nil>}
	I1209 05:40:41.296445 1422398 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-8c1346c67d6b IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:82:10:14:75:55:fb} reservation:<nil>}
	I1209 05:40:41.296875 1422398 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019e80f0}
	I1209 05:40:41.296895 1422398 network_create.go:124] attempt to create docker network newest-cni-262540 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1209 05:40:41.296949 1422398 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-262540 newest-cni-262540
	I1209 05:40:41.356493 1422398 network_create.go:108] docker network newest-cni-262540 192.168.76.0/24 created
	I1209 05:40:41.356525 1422398 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-262540" container
	I1209 05:40:41.356609 1422398 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1209 05:40:41.372493 1422398 cli_runner.go:164] Run: docker volume create newest-cni-262540 --label name.minikube.sigs.k8s.io=newest-cni-262540 --label created_by.minikube.sigs.k8s.io=true
	I1209 05:40:41.390479 1422398 oci.go:103] Successfully created a docker volume newest-cni-262540
	I1209 05:40:41.390571 1422398 cli_runner.go:164] Run: docker run --rm --name newest-cni-262540-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-262540 --entrypoint /usr/bin/test -v newest-cni-262540:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c -d /var/lib
	I1209 05:40:41.957365 1422398 oci.go:107] Successfully prepared a docker volume newest-cni-262540
	I1209 05:40:41.957440 1422398 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1209 05:40:41.957454 1422398 kic.go:194] Starting extracting preloaded images to volume ...
	I1209 05:40:41.957523 1422398 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-262540:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c -I lz4 -xf /preloaded.tar -C /extractDir
	I1209 05:40:46.577478 1422398 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-262540:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c -I lz4 -xf /preloaded.tar -C /extractDir: (4.619919939s)
	I1209 05:40:46.577511 1422398 kic.go:203] duration metric: took 4.620053703s to extract preloaded images to volume ...
	W1209 05:40:46.577655 1422398 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1209 05:40:46.577765 1422398 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1209 05:40:46.641962 1422398 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-262540 --name newest-cni-262540 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-262540 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-262540 --network newest-cni-262540 --ip 192.168.76.2 --volume newest-cni-262540:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c
	I1209 05:40:46.963179 1422398 cli_runner.go:164] Run: docker container inspect newest-cni-262540 --format={{.State.Running}}
	I1209 05:40:46.990367 1422398 cli_runner.go:164] Run: docker container inspect newest-cni-262540 --format={{.State.Status}}
	I1209 05:40:47.023042 1422398 cli_runner.go:164] Run: docker exec newest-cni-262540 stat /var/lib/dpkg/alternatives/iptables
	I1209 05:40:47.074649 1422398 oci.go:144] the created container "newest-cni-262540" has a running status.
	I1209 05:40:47.074676 1422398 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22081-1142328/.minikube/machines/newest-cni-262540/id_rsa...
	I1209 05:40:47.692225 1422398 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22081-1142328/.minikube/machines/newest-cni-262540/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1209 05:40:47.718517 1422398 cli_runner.go:164] Run: docker container inspect newest-cni-262540 --format={{.State.Status}}
	I1209 05:40:47.740875 1422398 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1209 05:40:47.740894 1422398 kic_runner.go:114] Args: [docker exec --privileged newest-cni-262540 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1209 05:40:47.780644 1422398 cli_runner.go:164] Run: docker container inspect newest-cni-262540 --format={{.State.Status}}
	I1209 05:40:47.797898 1422398 machine.go:94] provisionDockerMachine start ...
	I1209 05:40:47.798001 1422398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-262540
	I1209 05:40:47.813940 1422398 main.go:143] libmachine: Using SSH client type: native
	I1209 05:40:47.814280 1422398 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 34205 <nil> <nil>}
	I1209 05:40:47.814295 1422398 main.go:143] libmachine: About to run SSH command:
	hostname
	I1209 05:40:47.814927 1422398 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1209 05:40:50.967418 1422398 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-262540
	
	I1209 05:40:50.967444 1422398 ubuntu.go:182] provisioning hostname "newest-cni-262540"
	I1209 05:40:50.967507 1422398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-262540
	I1209 05:40:50.983898 1422398 main.go:143] libmachine: Using SSH client type: native
	I1209 05:40:50.984244 1422398 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 34205 <nil> <nil>}
	I1209 05:40:50.984261 1422398 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-262540 && echo "newest-cni-262540" | sudo tee /etc/hostname
	I1209 05:40:51.158163 1422398 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-262540
	
	I1209 05:40:51.158329 1422398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-262540
	I1209 05:40:51.176198 1422398 main.go:143] libmachine: Using SSH client type: native
	I1209 05:40:51.176519 1422398 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 34205 <nil> <nil>}
	I1209 05:40:51.176535 1422398 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-262540' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-262540/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-262540' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 05:40:51.328246 1422398 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1209 05:40:51.328276 1422398 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22081-1142328/.minikube CaCertPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22081-1142328/.minikube}
	I1209 05:40:51.328348 1422398 ubuntu.go:190] setting up certificates
	I1209 05:40:51.328357 1422398 provision.go:84] configureAuth start
	I1209 05:40:51.328443 1422398 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-262540
	I1209 05:40:51.345620 1422398 provision.go:143] copyHostCerts
	I1209 05:40:51.345692 1422398 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-1142328/.minikube/key.pem, removing ...
	I1209 05:40:51.345702 1422398 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-1142328/.minikube/key.pem
	I1209 05:40:51.345782 1422398 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22081-1142328/.minikube/key.pem (1675 bytes)
	I1209 05:40:51.345892 1422398 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.pem, removing ...
	I1209 05:40:51.345903 1422398 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.pem
	I1209 05:40:51.345937 1422398 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.pem (1078 bytes)
	I1209 05:40:51.345995 1422398 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-1142328/.minikube/cert.pem, removing ...
	I1209 05:40:51.346004 1422398 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-1142328/.minikube/cert.pem
	I1209 05:40:51.346028 1422398 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22081-1142328/.minikube/cert.pem (1123 bytes)
	I1209 05:40:51.346078 1422398 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca-key.pem org=jenkins.newest-cni-262540 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-262540]
	I1209 05:40:51.459612 1422398 provision.go:177] copyRemoteCerts
	I1209 05:40:51.459736 1422398 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 05:40:51.459804 1422398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-262540
	I1209 05:40:51.477068 1422398 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34205 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/newest-cni-262540/id_rsa Username:docker}
	I1209 05:40:51.583430 1422398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1209 05:40:51.599930 1422398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1209 05:40:51.616188 1422398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1209 05:40:51.632654 1422398 provision.go:87] duration metric: took 304.27698ms to configureAuth
	I1209 05:40:51.632690 1422398 ubuntu.go:206] setting minikube options for container-runtime
	I1209 05:40:51.632889 1422398 config.go:182] Loaded profile config "newest-cni-262540": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1209 05:40:51.632903 1422398 machine.go:97] duration metric: took 3.834981835s to provisionDockerMachine
	I1209 05:40:51.632910 1422398 client.go:176] duration metric: took 10.386351456s to LocalClient.Create
	I1209 05:40:51.632935 1422398 start.go:167] duration metric: took 10.386419491s to libmachine.API.Create "newest-cni-262540"
	I1209 05:40:51.632946 1422398 start.go:293] postStartSetup for "newest-cni-262540" (driver="docker")
	I1209 05:40:51.632957 1422398 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 05:40:51.633024 1422398 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 05:40:51.633069 1422398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-262540
	I1209 05:40:51.648788 1422398 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34205 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/newest-cni-262540/id_rsa Username:docker}
	I1209 05:40:51.751770 1422398 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 05:40:51.754890 1422398 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1209 05:40:51.754915 1422398 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1209 05:40:51.754931 1422398 filesync.go:126] Scanning /home/jenkins/minikube-integration/22081-1142328/.minikube/addons for local assets ...
	I1209 05:40:51.754996 1422398 filesync.go:126] Scanning /home/jenkins/minikube-integration/22081-1142328/.minikube/files for local assets ...
	I1209 05:40:51.755088 1422398 filesync.go:149] local asset: /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/ssl/certs/11442312.pem -> 11442312.pem in /etc/ssl/certs
	I1209 05:40:51.755194 1422398 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 05:40:51.762311 1422398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/ssl/certs/11442312.pem --> /etc/ssl/certs/11442312.pem (1708 bytes)
	I1209 05:40:51.778994 1422398 start.go:296] duration metric: took 146.033857ms for postStartSetup
	I1209 05:40:51.779431 1422398 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-262540
	I1209 05:40:51.798065 1422398 profile.go:143] Saving config to /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/config.json ...
	I1209 05:40:51.798353 1422398 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1209 05:40:51.798402 1422398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-262540
	I1209 05:40:51.814583 1422398 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34205 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/newest-cni-262540/id_rsa Username:docker}
	I1209 05:40:51.917312 1422398 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1209 05:40:51.922304 1422398 start.go:128] duration metric: took 10.679457533s to createHost
	I1209 05:40:51.922328 1422398 start.go:83] releasing machines lock for "newest-cni-262540", held for 10.67959362s
	I1209 05:40:51.922409 1422398 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-262540
	I1209 05:40:51.939569 1422398 ssh_runner.go:195] Run: cat /version.json
	I1209 05:40:51.939636 1422398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-262540
	I1209 05:40:51.939638 1422398 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 05:40:51.939698 1422398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-262540
	I1209 05:40:51.960875 1422398 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34205 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/newest-cni-262540/id_rsa Username:docker}
	I1209 05:40:51.963453 1422398 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34205 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/newest-cni-262540/id_rsa Username:docker}
	I1209 05:40:52.063736 1422398 ssh_runner.go:195] Run: systemctl --version
	I1209 05:40:52.156351 1422398 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 05:40:52.160600 1422398 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 05:40:52.160672 1422398 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 05:40:52.187388 1422398 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1209 05:40:52.187415 1422398 start.go:496] detecting cgroup driver to use...
	I1209 05:40:52.187446 1422398 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1209 05:40:52.187504 1422398 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1209 05:40:52.203080 1422398 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1209 05:40:52.215843 1422398 docker.go:218] disabling cri-docker service (if available) ...
	I1209 05:40:52.215908 1422398 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 05:40:52.232148 1422398 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 05:40:52.250032 1422398 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 05:40:52.358548 1422398 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 05:40:52.481614 1422398 docker.go:234] disabling docker service ...
	I1209 05:40:52.481725 1422398 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 05:40:52.502779 1422398 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 05:40:52.515525 1422398 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 05:40:52.630357 1422398 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 05:40:52.754667 1422398 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 05:40:52.769286 1422398 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 05:40:52.785364 1422398 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1209 05:40:52.794252 1422398 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1209 05:40:52.803528 1422398 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1209 05:40:52.803619 1422398 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1209 05:40:52.812544 1422398 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1209 05:40:52.820837 1422398 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1209 05:40:52.829672 1422398 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1209 05:40:52.838554 1422398 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 05:40:52.846308 1422398 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1209 05:40:52.854529 1422398 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1209 05:40:52.863150 1422398 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
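	The run of sed commands above rewrites /etc/containerd/config.toml so containerd uses the cgroupfs driver detected on the host. A minimal Go sketch of the central edit, assuming the same file path as the log; the actual work in this run is done by the sed invocations shown:

```go
// cgroupfs.go - sketch: the same edit the logged sed command performs,
// forcing SystemdCgroup = false in containerd's config.toml.
package main

import (
	"os"
	"regexp"
)

func main() {
	const path = "/etc/containerd/config.toml"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	// equivalent of: sed -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
	if err := os.WriteFile(path, out, 0o644); err != nil {
		panic(err)
	}
}
```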
	I1209 05:40:52.871579 1422398 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 05:40:52.878758 1422398 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 05:40:52.886006 1422398 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 05:40:53.012110 1422398 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1209 05:40:53.145258 1422398 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1209 05:40:53.145356 1422398 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1209 05:40:53.148998 1422398 start.go:564] Will wait 60s for crictl version
	I1209 05:40:53.149063 1422398 ssh_runner.go:195] Run: which crictl
	I1209 05:40:53.152446 1422398 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1209 05:40:53.177386 1422398 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1209 05:40:53.177452 1422398 ssh_runner.go:195] Run: containerd --version
	I1209 05:40:53.199507 1422398 ssh_runner.go:195] Run: containerd --version
	I1209 05:40:53.225320 1422398 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1209 05:40:53.228305 1422398 cli_runner.go:164] Run: docker network inspect newest-cni-262540 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1209 05:40:53.243962 1422398 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1209 05:40:53.247757 1422398 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
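	The grep/echo pipeline above is an idempotent hosts-file update: drop any stale host.minikube.internal entry, then append the fresh mapping. A sketch of the same pattern in Go, with the IP and hostname taken from the log line; writing /etc/hosts directly is a simplification of the logged staging through /tmp/h.$$ and sudo cp:

```go
// hostsline.go - sketch of the filter-and-append hosts update from the log.
package main

import (
	"os"
	"strings"
)

func main() {
	const entry = "192.168.76.1\thost.minikube.internal"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	// keep every line except an existing host.minikube.internal mapping
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\thost.minikube.internal") {
			kept = append(kept, line)
		}
	}
	kept = append(kept, entry)
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		panic(err)
	}
}
```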
	I1209 05:40:53.260215 1422398 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1209 05:40:53.262990 1422398 kubeadm.go:884] updating cluster {Name:newest-cni-262540 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-262540 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 05:40:53.263149 1422398 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1209 05:40:53.263229 1422398 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 05:40:53.289432 1422398 containerd.go:627] all images are preloaded for containerd runtime.
	I1209 05:40:53.289455 1422398 containerd.go:534] Images already preloaded, skipping extraction
	I1209 05:40:53.289546 1422398 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 05:40:53.312520 1422398 containerd.go:627] all images are preloaded for containerd runtime.
	I1209 05:40:53.312544 1422398 cache_images.go:86] Images are preloaded, skipping loading
	I1209 05:40:53.312552 1422398 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1209 05:40:53.312646 1422398 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-262540 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-262540 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1209 05:40:53.312713 1422398 ssh_runner.go:195] Run: sudo crictl info
	I1209 05:40:53.337527 1422398 cni.go:84] Creating CNI manager for ""
	I1209 05:40:53.337552 1422398 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1209 05:40:53.337571 1422398 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1209 05:40:53.337595 1422398 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-262540 NodeName:newest-cni-262540 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1209 05:40:53.337729 1422398 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-262540"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1209 05:40:53.337802 1422398 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1209 05:40:53.345447 1422398 binaries.go:51] Found k8s binaries, skipping transfer
	I1209 05:40:53.345517 1422398 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1209 05:40:53.352930 1422398 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1209 05:40:53.365409 1422398 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1209 05:40:53.377954 1422398 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
	I1209 05:40:53.391187 1422398 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1209 05:40:53.394878 1422398 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 05:40:53.404484 1422398 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 05:40:53.509615 1422398 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 05:40:53.532992 1422398 certs.go:69] Setting up /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540 for IP: 192.168.76.2
	I1209 05:40:53.533014 1422398 certs.go:195] generating shared ca certs ...
	I1209 05:40:53.533065 1422398 certs.go:227] acquiring lock for ca certs: {Name:mk15788702f8c4e23b5aeab3f44961d296fab259 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 05:40:53.533239 1422398 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.key
	I1209 05:40:53.533305 1422398 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/proxy-client-ca.key
	I1209 05:40:53.533322 1422398 certs.go:257] generating profile certs ...
	I1209 05:40:53.533397 1422398 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/client.key
	I1209 05:40:53.533414 1422398 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/client.crt with IP's: []
	I1209 05:40:53.604706 1422398 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/client.crt ...
	I1209 05:40:53.604742 1422398 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/client.crt: {Name:mk908e1c63967383d20a56065c79b4bc0877c829 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 05:40:53.604954 1422398 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/client.key ...
	I1209 05:40:53.604968 1422398 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/client.key: {Name:mk0782d8c9cde6107bc905e7c1ffdb2b8a8e707c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 05:40:53.605064 1422398 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/apiserver.key.0ed49b31
	I1209 05:40:53.605085 1422398 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/apiserver.crt.0ed49b31 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1209 05:40:53.850901 1422398 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/apiserver.crt.0ed49b31 ...
	I1209 05:40:53.850943 1422398 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/apiserver.crt.0ed49b31: {Name:mkd1e6249eaef6a320629a45c3aa63c6b2fe9252 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 05:40:53.851131 1422398 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/apiserver.key.0ed49b31 ...
	I1209 05:40:53.851147 1422398 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/apiserver.key.0ed49b31: {Name:mk9df2970f8e62123fc8a73f846dec85a46dbe82 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 05:40:53.851239 1422398 certs.go:382] copying /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/apiserver.crt.0ed49b31 -> /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/apiserver.crt
	I1209 05:40:53.851366 1422398 certs.go:386] copying /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/apiserver.key.0ed49b31 -> /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/apiserver.key
	I1209 05:40:53.851432 1422398 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/proxy-client.key
	I1209 05:40:53.851456 1422398 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/proxy-client.crt with IP's: []
	I1209 05:40:54.332232 1422398 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/proxy-client.crt ...
	I1209 05:40:54.332268 1422398 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/proxy-client.crt: {Name:mk86c5c1261e1f4a7a13e3996ae202e7dfe017ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 05:40:54.332465 1422398 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/proxy-client.key ...
	I1209 05:40:54.332479 1422398 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/proxy-client.key: {Name:mk2b143aa140867219200e00888917dfd6928724 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 05:40:54.332672 1422398 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/1144231.pem (1338 bytes)
	W1209 05:40:54.332718 1422398 certs.go:480] ignoring /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/1144231_empty.pem, impossibly tiny 0 bytes
	I1209 05:40:54.332732 1422398 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca-key.pem (1679 bytes)
	I1209 05:40:54.332759 1422398 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem (1078 bytes)
	I1209 05:40:54.332787 1422398 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/cert.pem (1123 bytes)
	I1209 05:40:54.332816 1422398 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/key.pem (1675 bytes)
	I1209 05:40:54.332865 1422398 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/ssl/certs/11442312.pem (1708 bytes)
	I1209 05:40:54.333451 1422398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 05:40:54.351622 1422398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1209 05:40:54.369353 1422398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 05:40:54.386962 1422398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1209 05:40:54.405322 1422398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1209 05:40:54.422647 1422398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1209 05:40:54.483231 1422398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 05:40:54.515176 1422398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1209 05:40:54.533753 1422398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/ssl/certs/11442312.pem --> /usr/share/ca-certificates/11442312.pem (1708 bytes)
	I1209 05:40:54.552730 1422398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 05:40:54.570021 1422398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/1144231.pem --> /usr/share/ca-certificates/1144231.pem (1338 bytes)
	I1209 05:40:54.587455 1422398 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 05:40:54.600371 1422398 ssh_runner.go:195] Run: openssl version
	I1209 05:40:54.606642 1422398 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/11442312.pem
	I1209 05:40:54.613904 1422398 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/11442312.pem /etc/ssl/certs/11442312.pem
	I1209 05:40:54.621395 1422398 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11442312.pem
	I1209 05:40:54.624932 1422398 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 04:18 /usr/share/ca-certificates/11442312.pem
	I1209 05:40:54.625005 1422398 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11442312.pem
	I1209 05:40:54.665847 1422398 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1209 05:40:54.673355 1422398 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/11442312.pem /etc/ssl/certs/3ec20f2e.0
	I1209 05:40:54.680386 1422398 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1209 05:40:54.687518 1422398 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1209 05:40:54.694760 1422398 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 05:40:54.698200 1422398 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 04:09 /usr/share/ca-certificates/minikubeCA.pem
	I1209 05:40:54.698275 1422398 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 05:40:54.739105 1422398 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1209 05:40:54.746468 1422398 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1209 05:40:54.753754 1422398 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1144231.pem
	I1209 05:40:54.761267 1422398 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1144231.pem /etc/ssl/certs/1144231.pem
	I1209 05:40:54.768631 1422398 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1144231.pem
	I1209 05:40:54.772107 1422398 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 04:18 /usr/share/ca-certificates/1144231.pem
	I1209 05:40:54.772200 1422398 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1144231.pem
	I1209 05:40:54.812987 1422398 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1209 05:40:54.820239 1422398 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/1144231.pem /etc/ssl/certs/51391683.0
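	The test -L / ln -fs pairs above maintain OpenSSL-style subject-hash symlinks (3ec20f2e.0, b5213941.0, 51391683.0) so the installed certificates are found by the system trust store. A sketch of how one such link is derived, using the same openssl x509 -hash call the log runs; the specific cert path is an assumption for illustration:

```go
// certlink.go - sketch: create the <subject-hash>.0 symlink the log builds
// for each CA bundle, via `openssl x509 -hash -noout -in <cert>`.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. b5213941, as in the log
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	os.Remove(link) // ln -fs semantics: replace any existing link
	if err := os.Symlink(cert, link); err != nil {
		panic(err)
	}
}
```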
	I1209 05:40:54.827466 1422398 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 05:40:54.830847 1422398 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1209 05:40:54.830917 1422398 kubeadm.go:401] StartCluster: {Name:newest-cni-262540 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-262540 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 05:40:54.831012 1422398 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1209 05:40:54.831072 1422398 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 05:40:54.863416 1422398 cri.go:89] found id: ""
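	cri.go decides whether any kube-system containers already exist by shelling out to crictl; the empty result here (found id: "") confirms a fresh node. A sketch of that invocation, with the flags copied from the log line above:

```go
// crilist.go - sketch of the crictl call behind the cri.go lines:
// list container IDs in the kube-system namespace.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		panic(err)
	}
	ids := strings.Fields(string(out)) // one ID per line; empty on a fresh node
	fmt.Printf("found %d containers: %v\n", len(ids), ids)
}
```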
	I1209 05:40:54.863486 1422398 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1209 05:40:54.871043 1422398 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 05:40:54.878854 1422398 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1209 05:40:54.878952 1422398 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 05:40:54.886794 1422398 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 05:40:54.886847 1422398 kubeadm.go:158] found existing configuration files:
	
	I1209 05:40:54.886908 1422398 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1209 05:40:54.894435 1422398 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 05:40:54.894550 1422398 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 05:40:54.901704 1422398 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1209 05:40:54.909273 1422398 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 05:40:54.909385 1422398 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 05:40:54.916897 1422398 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1209 05:40:54.924926 1422398 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 05:40:54.925024 1422398 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 05:40:54.932137 1422398 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1209 05:40:54.939823 1422398 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 05:40:54.939911 1422398 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1209 05:40:54.947153 1422398 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1209 05:40:54.985945 1422398 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1209 05:40:54.986006 1422398 kubeadm.go:319] [preflight] Running pre-flight checks
	I1209 05:40:55.098038 1422398 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1209 05:40:55.098124 1422398 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1209 05:40:55.098168 1422398 kubeadm.go:319] OS: Linux
	I1209 05:40:55.098224 1422398 kubeadm.go:319] CGROUPS_CPU: enabled
	I1209 05:40:55.098279 1422398 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1209 05:40:55.098332 1422398 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1209 05:40:55.098392 1422398 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1209 05:40:55.098445 1422398 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1209 05:40:55.098502 1422398 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1209 05:40:55.098554 1422398 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1209 05:40:55.098607 1422398 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1209 05:40:55.098661 1422398 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1209 05:40:55.213327 1422398 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1209 05:40:55.213517 1422398 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1209 05:40:55.213698 1422398 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1209 05:40:55.232400 1422398 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1209 05:40:55.239096 1422398 out.go:252]   - Generating certificates and keys ...
	I1209 05:40:55.239277 1422398 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1209 05:40:55.239377 1422398 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1209 05:40:55.754714 1422398 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1209 05:40:56.183780 1422398 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1209 05:40:56.537089 1422398 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1209 05:40:56.838991 1422398 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1209 05:40:57.144061 1422398 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1209 05:40:57.144319 1422398 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-262540] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1209 05:40:57.237080 1422398 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1209 05:40:57.237305 1422398 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-262540] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1209 05:40:57.410307 1422398 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1209 05:40:57.494105 1422398 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1209 05:40:57.828849 1422398 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1209 05:40:57.829173 1422398 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1209 05:40:58.186047 1422398 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1209 05:40:58.553535 1422398 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1209 05:40:58.846953 1422398 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1209 05:40:59.216978 1422398 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1209 05:40:59.442501 1422398 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1209 05:40:59.443253 1422398 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1209 05:40:59.445958 1422398 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1209 05:40:59.449559 1422398 out.go:252]   - Booting up control plane ...
	I1209 05:40:59.449660 1422398 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1209 05:40:59.449739 1422398 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1209 05:40:59.449809 1422398 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1209 05:40:59.466855 1422398 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1209 05:40:59.467191 1422398 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1209 05:40:59.475169 1422398 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1209 05:40:59.475483 1422398 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1209 05:40:59.475706 1422398 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1209 05:40:59.606469 1422398 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1209 05:40:59.606609 1422398 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1209 05:43:42.940465 1404644 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001103315s
	I1209 05:43:42.940494 1404644 kubeadm.go:319] 
	I1209 05:43:42.940552 1404644 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1209 05:43:42.940585 1404644 kubeadm.go:319] 	- The kubelet is not running
	I1209 05:43:42.940690 1404644 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1209 05:43:42.940694 1404644 kubeadm.go:319] 
	I1209 05:43:42.940799 1404644 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1209 05:43:42.940831 1404644 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1209 05:43:42.940862 1404644 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1209 05:43:42.940866 1404644 kubeadm.go:319] 
	I1209 05:43:42.944449 1404644 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1209 05:43:42.944876 1404644 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1209 05:43:42.944989 1404644 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1209 05:43:42.945227 1404644 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1209 05:43:42.945235 1404644 kubeadm.go:319] 
	I1209 05:43:42.945305 1404644 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
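	The wait-control-plane phase that failed here is, per kubeadm's own message, equivalent to polling curl -sSL http://127.0.0.1:10248/healthz for up to 4m0s. A sketch of that probe loop (the retry interval and client timeout are arbitrary choices, not kubeadm's):

```go
// healthz.go - sketch of the kubelet health probe described above: poll
// http://127.0.0.1:10248/healthz until it answers 200 or the 4m0s budget
// from the log runs out.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	deadline := time.Now().Add(4 * time.Minute)
	client := &http.Client{Timeout: 5 * time.Second}
	for time.Now().Before(deadline) {
		resp, err := client.Get("http://127.0.0.1:10248/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("kubelet is healthy")
				return
			}
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("kubelet not healthy within 4m0s") // the outcome in this run
}
```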
	I1209 05:43:42.945358 1404644 kubeadm.go:403] duration metric: took 8m7.791342576s to StartCluster
	I1209 05:43:42.945399 1404644 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:43:42.945466 1404644 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:43:42.969310 1404644 cri.go:89] found id: ""
	I1209 05:43:42.969335 1404644 logs.go:282] 0 containers: []
	W1209 05:43:42.969343 1404644 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:43:42.969349 1404644 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:43:42.969414 1404644 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:43:42.997525 1404644 cri.go:89] found id: ""
	I1209 05:43:42.997547 1404644 logs.go:282] 0 containers: []
	W1209 05:43:42.997556 1404644 logs.go:284] No container was found matching "etcd"
	I1209 05:43:42.997562 1404644 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:43:42.997619 1404644 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:43:43.022335 1404644 cri.go:89] found id: ""
	I1209 05:43:43.022360 1404644 logs.go:282] 0 containers: []
	W1209 05:43:43.022369 1404644 logs.go:284] No container was found matching "coredns"
	I1209 05:43:43.022380 1404644 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:43:43.022440 1404644 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:43:43.046700 1404644 cri.go:89] found id: ""
	I1209 05:43:43.046725 1404644 logs.go:282] 0 containers: []
	W1209 05:43:43.046734 1404644 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:43:43.046739 1404644 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:43:43.046797 1404644 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:43:43.071875 1404644 cri.go:89] found id: ""
	I1209 05:43:43.071906 1404644 logs.go:282] 0 containers: []
	W1209 05:43:43.071915 1404644 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:43:43.071921 1404644 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:43:43.071986 1404644 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:43:43.096153 1404644 cri.go:89] found id: ""
	I1209 05:43:43.096176 1404644 logs.go:282] 0 containers: []
	W1209 05:43:43.096190 1404644 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:43:43.096198 1404644 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:43:43.096259 1404644 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:43:43.121898 1404644 cri.go:89] found id: ""
	I1209 05:43:43.121922 1404644 logs.go:282] 0 containers: []
	W1209 05:43:43.121931 1404644 logs.go:284] No container was found matching "kindnet"
	I1209 05:43:43.121940 1404644 logs.go:123] Gathering logs for containerd ...
	I1209 05:43:43.121951 1404644 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:43:43.163306 1404644 logs.go:123] Gathering logs for container status ...
	I1209 05:43:43.163339 1404644 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:43:43.207532 1404644 logs.go:123] Gathering logs for kubelet ...
	I1209 05:43:43.207567 1404644 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:43:43.277243 1404644 logs.go:123] Gathering logs for dmesg ...
	I1209 05:43:43.277279 1404644 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:43:43.298477 1404644 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:43:43.298507 1404644 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:43:43.365347 1404644 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:43:43.357461    5461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:43:43.358090    5461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:43:43.359781    5461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:43:43.360115    5461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:43:43.361539    5461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:43:43.357461    5461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:43:43.358090    5461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:43:43.359781    5461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:43:43.360115    5461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:43:43.361539    5461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1209 05:43:43.365381 1404644 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001103315s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1209 05:43:43.365413 1404644 out.go:285] * 
	W1209 05:43:43.365475 1404644 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	
	W1209 05:43:43.365493 1404644 out.go:285] * 
	W1209 05:43:43.367868 1404644 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 05:43:43.374302 1404644 out.go:203] 
	W1209 05:43:43.377044 1404644 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	
	W1209 05:43:43.377091 1404644 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1209 05:43:43.377112 1404644 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1209 05:43:43.380387 1404644 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 09 05:35:22 no-preload-842269 containerd[758]: time="2025-12-09T05:35:22.036308692Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-scheduler:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 09 05:35:23 no-preload-842269 containerd[758]: time="2025-12-09T05:35:23.816614233Z" level=info msg="No images store for sha256:eb9020767c0d3bbd754f3f52cbe4c8bdd935dd5862604d6dc0b1f10422189544"
	Dec 09 05:35:23 no-preload-842269 containerd[758]: time="2025-12-09T05:35:23.819781105Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.0-beta.0\""
	Dec 09 05:35:23 no-preload-842269 containerd[758]: time="2025-12-09T05:35:23.827929881Z" level=info msg="ImageCreate event name:\"sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 09 05:35:23 no-preload-842269 containerd[758]: time="2025-12-09T05:35:23.828477017Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-proxy:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 09 05:35:25 no-preload-842269 containerd[758]: time="2025-12-09T05:35:25.333353027Z" level=info msg="No images store for sha256:5ed8f231f07481c657ad0e1d039921948e7abbc30ef6215465129012c4c4a508"
	Dec 09 05:35:25 no-preload-842269 containerd[758]: time="2025-12-09T05:35:25.336577825Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-beta.0\""
	Dec 09 05:35:25 no-preload-842269 containerd[758]: time="2025-12-09T05:35:25.345148784Z" level=info msg="ImageCreate event name:\"sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 09 05:35:25 no-preload-842269 containerd[758]: time="2025-12-09T05:35:25.345908041Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 09 05:35:27 no-preload-842269 containerd[758]: time="2025-12-09T05:35:27.120561243Z" level=info msg="No images store for sha256:64f3fb0a3392f487dbd4300c920f76dc3de2961e11fd6bfbedc75c0d25b1954c"
	Dec 09 05:35:27 no-preload-842269 containerd[758]: time="2025-12-09T05:35:27.122833000Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-beta.0\""
	Dec 09 05:35:27 no-preload-842269 containerd[758]: time="2025-12-09T05:35:27.131014710Z" level=info msg="ImageCreate event name:\"sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 09 05:35:27 no-preload-842269 containerd[758]: time="2025-12-09T05:35:27.132076917Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 09 05:35:28 no-preload-842269 containerd[758]: time="2025-12-09T05:35:28.575071692Z" level=info msg="No images store for sha256:5e4a4fe83792bf529a4e283e09069cf50cc9882d04168a33903ed6809a492e61"
	Dec 09 05:35:28 no-preload-842269 containerd[758]: time="2025-12-09T05:35:28.577744678Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\""
	Dec 09 05:35:28 no-preload-842269 containerd[758]: time="2025-12-09T05:35:28.588706439Z" level=info msg="ImageCreate event name:\"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 09 05:35:28 no-preload-842269 containerd[758]: time="2025-12-09T05:35:28.589547631Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 09 05:35:30 no-preload-842269 containerd[758]: time="2025-12-09T05:35:30.380855874Z" level=info msg="No images store for sha256:89a52ae86f116708cd5ba0d54dfbf2ae3011f126ee9161c4afb19bf2a51ef285"
	Dec 09 05:35:30 no-preload-842269 containerd[758]: time="2025-12-09T05:35:30.384556393Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\""
	Dec 09 05:35:30 no-preload-842269 containerd[758]: time="2025-12-09T05:35:30.398357527Z" level=info msg="ImageCreate event name:\"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 09 05:35:30 no-preload-842269 containerd[758]: time="2025-12-09T05:35:30.402958452Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 09 05:35:30 no-preload-842269 containerd[758]: time="2025-12-09T05:35:30.951034968Z" level=info msg="No images store for sha256:7475c7d18769df89a804d5bebf679dbf94886f3626f07a2be923beaa0cc7e5b0"
	Dec 09 05:35:30 no-preload-842269 containerd[758]: time="2025-12-09T05:35:30.953249078Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\""
	Dec 09 05:35:30 no-preload-842269 containerd[758]: time="2025-12-09T05:35:30.967195965Z" level=info msg="ImageCreate event name:\"sha256:66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 09 05:35:30 no-preload-842269 containerd[758]: time="2025-12-09T05:35:30.967501105Z" level=info msg="ImageUpdate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:43:44.461553    5564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:43:44.462177    5564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:43:44.463710    5564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:43:44.464248    5564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:43:44.465881    5564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +25.904254] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:14] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:16] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:18] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:19] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:30] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:32] overlayfs: idmapped layers are currently not supported
	[ +28.114653] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:33] overlayfs: idmapped layers are currently not supported
	[ +23.720849] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:34] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:35] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:36] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:37] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:38] overlayfs: idmapped layers are currently not supported
	[ +23.656275] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:39] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:57] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:58] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:00] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:02] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:03] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:05] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:06] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 9 05:31] overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	
	
	==> kernel <==
	 05:43:44 up  8:25,  0 user,  load average: 0.18, 1.31, 1.80
	Linux no-preload-842269 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 09 05:43:40 no-preload-842269 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 09 05:43:41 no-preload-842269 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 318.
	Dec 09 05:43:41 no-preload-842269 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 05:43:41 no-preload-842269 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 05:43:41 no-preload-842269 kubelet[5369]: E1209 05:43:41.732201    5369 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 09 05:43:41 no-preload-842269 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 09 05:43:41 no-preload-842269 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 09 05:43:42 no-preload-842269 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
	Dec 09 05:43:42 no-preload-842269 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 05:43:42 no-preload-842269 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 05:43:42 no-preload-842269 kubelet[5375]: E1209 05:43:42.497564    5375 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 09 05:43:42 no-preload-842269 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 09 05:43:42 no-preload-842269 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 09 05:43:43 no-preload-842269 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 09 05:43:43 no-preload-842269 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 05:43:43 no-preload-842269 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 05:43:43 no-preload-842269 kubelet[5446]: E1209 05:43:43.286563    5446 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 09 05:43:43 no-preload-842269 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 09 05:43:43 no-preload-842269 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 09 05:43:43 no-preload-842269 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 09 05:43:43 no-preload-842269 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 05:43:43 no-preload-842269 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 05:43:44 no-preload-842269 kubelet[5480]: E1209 05:43:44.043906    5480 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 09 05:43:44 no-preload-842269 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 09 05:43:44 no-preload-842269 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
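The suggestion minikube prints above points at the kubelet cgroup driver, while the kubelet journal in the same dump shows the node failing kubelet's own cgroup v1 validation. For reference, the suggested invocation would look like the sketch below. It is illustrative only: the profile name and the driver/runtime/version flags are taken from this run's logged config, but the original start command's full flag set is not reproduced in this section.

out/minikube-linux-arm64 start -p no-preload-842269 \
  --driver=docker \
  --container-runtime=containerd \
  --kubernetes-version=v1.35.0-beta.0 \
  --extra-config=kubelet.cgroup-driver=systemd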
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-842269 -n no-preload-842269
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-842269 -n no-preload-842269: exit status 6 (362.842488ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1209 05:43:44.933421 1427077 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-842269" does not appear in /home/jenkins/minikube-integration/22081-1142328/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "no-preload-842269" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (515.83s)
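The kubelet journal above pins down the failure: kubelet v1.35.0-beta.0 refuses to pass configuration validation on a cgroup v1 host, so systemd restarts it in a loop until kubeadm's 4m0s health check expires. Below is a minimal shell sketch of the workaround named in the SystemVerification warning, assuming shell access inside the node. The failCgroupV1 field name comes from that warning text, and the config path is the one the failing kubeadm invocation used; if that generated file already carries a KubeletConfiguration document, the field belongs inside it rather than in an appended one. Treat the whole sequence as an assumption, not the test's procedure.

# Confirm the host cgroup version: "cgroup2fs" means v2, "tmpfs" means
# the deprecated v1 hierarchy this node is running.
stat -fc %T /sys/fs/cgroup/

# Opt the kubelet back into cgroup v1 and retry; SystemVerification is
# skipped because the cgroup v1 deprecation warning would still fire.
cat >>/var/tmp/minikube/kubeadm.yaml <<'EOF'
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
failCgroupV1: false
EOF
sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml \
  --ignore-preflight-errors=SystemVerification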

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (503.74s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-262540 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
E1209 05:41:06.731718 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/addons-221952/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 05:41:10.793313 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/old-k8s-version-384009/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 05:42:38.985860 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-717497/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 05:43:26.932560 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/old-k8s-version-384009/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p newest-cni-262540 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: exit status 109 (8m22.17508495s)

                                                
                                                
-- stdout --
	* [newest-cni-262540] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22081
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22081-1142328/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-1142328/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "newest-cni-262540" primary control-plane node in "newest-cni-262540" cluster
	* Pulling base image v0.0.48-1765184860-22066 ...
	* Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	  - kubeadm.pod-network-cidr=10.42.0.0/16
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1209 05:40:41.014166 1422398 out.go:360] Setting OutFile to fd 1 ...
	I1209 05:40:41.014346 1422398 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 05:40:41.014376 1422398 out.go:374] Setting ErrFile to fd 2...
	I1209 05:40:41.014403 1422398 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 05:40:41.014777 1422398 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-1142328/.minikube/bin
	I1209 05:40:41.015346 1422398 out.go:368] Setting JSON to false
	I1209 05:40:41.016651 1422398 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":30164,"bootTime":1765228677,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1209 05:40:41.016752 1422398 start.go:143] virtualization:  
	I1209 05:40:41.020737 1422398 out.go:179] * [newest-cni-262540] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1209 05:40:41.025100 1422398 out.go:179]   - MINIKUBE_LOCATION=22081
	I1209 05:40:41.025177 1422398 notify.go:221] Checking for updates...
	I1209 05:40:41.031377 1422398 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 05:40:41.034527 1422398 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22081-1142328/kubeconfig
	I1209 05:40:41.037660 1422398 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-1142328/.minikube
	I1209 05:40:41.040646 1422398 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1209 05:40:41.043555 1422398 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 05:40:41.047098 1422398 config.go:182] Loaded profile config "no-preload-842269": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1209 05:40:41.047203 1422398 driver.go:422] Setting default libvirt URI to qemu:///system
	I1209 05:40:41.082759 1422398 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1209 05:40:41.082877 1422398 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 05:40:41.141221 1422398 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-09 05:40:41.131267754 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1209 05:40:41.141331 1422398 docker.go:319] overlay module found
	I1209 05:40:41.144673 1422398 out.go:179] * Using the docker driver based on user configuration
	I1209 05:40:41.147595 1422398 start.go:309] selected driver: docker
	I1209 05:40:41.147618 1422398 start.go:927] validating driver "docker" against <nil>
	I1209 05:40:41.147633 1422398 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 05:40:41.148480 1422398 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 05:40:41.205051 1422398 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-09 05:40:41.195808894 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1209 05:40:41.205216 1422398 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1209 05:40:41.205249 1422398 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1209 05:40:41.205488 1422398 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1209 05:40:41.208233 1422398 out.go:179] * Using Docker driver with root privileges
	I1209 05:40:41.211172 1422398 cni.go:84] Creating CNI manager for ""
	I1209 05:40:41.211250 1422398 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1209 05:40:41.211263 1422398 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1209 05:40:41.211347 1422398 start.go:353] cluster config:
	{Name:newest-cni-262540 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-262540 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 05:40:41.214410 1422398 out.go:179] * Starting "newest-cni-262540" primary control-plane node in "newest-cni-262540" cluster
	I1209 05:40:41.217388 1422398 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1209 05:40:41.220416 1422398 out.go:179] * Pulling base image v0.0.48-1765184860-22066 ...
	I1209 05:40:41.223240 1422398 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1209 05:40:41.223288 1422398 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1209 05:40:41.223310 1422398 cache.go:65] Caching tarball of preloaded images
	I1209 05:40:41.223322 1422398 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local docker daemon
	I1209 05:40:41.223405 1422398 preload.go:238] Found /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1209 05:40:41.223416 1422398 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1209 05:40:41.223520 1422398 profile.go:143] Saving config to /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/config.json ...
	I1209 05:40:41.223546 1422398 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/config.json: {Name:mk3f2f0447b25b9c02ca47937d45ed297c23b284 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 05:40:41.242533 1422398 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local docker daemon, skipping pull
	I1209 05:40:41.242556 1422398 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c exists in daemon, skipping load
	I1209 05:40:41.242574 1422398 cache.go:243] Successfully downloaded all kic artifacts
	I1209 05:40:41.242607 1422398 start.go:360] acquireMachinesLock for newest-cni-262540: {Name:mk272d84ff1bc8c8949f2f0b1f608a7519899d10 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 05:40:41.242722 1422398 start.go:364] duration metric: took 94.012µs to acquireMachinesLock for "newest-cni-262540"
	I1209 05:40:41.242752 1422398 start.go:93] Provisioning new machine with config: &{Name:newest-cni-262540 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-262540 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1209 05:40:41.242832 1422398 start.go:125] createHost starting for "" (driver="docker")
	I1209 05:40:41.246278 1422398 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1209 05:40:41.246513 1422398 start.go:159] libmachine.API.Create for "newest-cni-262540" (driver="docker")
	I1209 05:40:41.246549 1422398 client.go:173] LocalClient.Create starting
	I1209 05:40:41.246618 1422398 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem
	I1209 05:40:41.246653 1422398 main.go:143] libmachine: Decoding PEM data...
	I1209 05:40:41.246672 1422398 main.go:143] libmachine: Parsing certificate...
	I1209 05:40:41.246730 1422398 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/cert.pem
	I1209 05:40:41.246753 1422398 main.go:143] libmachine: Decoding PEM data...
	I1209 05:40:41.246765 1422398 main.go:143] libmachine: Parsing certificate...
	I1209 05:40:41.247138 1422398 cli_runner.go:164] Run: docker network inspect newest-cni-262540 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1209 05:40:41.262988 1422398 cli_runner.go:211] docker network inspect newest-cni-262540 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1209 05:40:41.263073 1422398 network_create.go:284] running [docker network inspect newest-cni-262540] to gather additional debugging logs...
	I1209 05:40:41.263095 1422398 cli_runner.go:164] Run: docker network inspect newest-cni-262540
	W1209 05:40:41.279120 1422398 cli_runner.go:211] docker network inspect newest-cni-262540 returned with exit code 1
	I1209 05:40:41.279154 1422398 network_create.go:287] error running [docker network inspect newest-cni-262540]: docker network inspect newest-cni-262540: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-262540 not found
	I1209 05:40:41.279168 1422398 network_create.go:289] output of [docker network inspect newest-cni-262540]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-262540 not found
	
	** /stderr **
	I1209 05:40:41.279286 1422398 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1209 05:40:41.295748 1422398 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-7a15eec16b1a IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:8a:b7:58:bc:12:6c} reservation:<nil>}
	I1209 05:40:41.296192 1422398 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-fcb9e6b38e8e IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:56:c3:7a:b4:06:4b} reservation:<nil>}
	I1209 05:40:41.296445 1422398 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-8c1346c67d6b IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:82:10:14:75:55:fb} reservation:<nil>}
	I1209 05:40:41.296875 1422398 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019e80f0}
	I1209 05:40:41.296895 1422398 network_create.go:124] attempt to create docker network newest-cni-262540 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1209 05:40:41.296949 1422398 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-262540 newest-cni-262540
	I1209 05:40:41.356493 1422398 network_create.go:108] docker network newest-cni-262540 192.168.76.0/24 created
	I1209 05:40:41.356525 1422398 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-262540" container
	I1209 05:40:41.356609 1422398 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1209 05:40:41.372493 1422398 cli_runner.go:164] Run: docker volume create newest-cni-262540 --label name.minikube.sigs.k8s.io=newest-cni-262540 --label created_by.minikube.sigs.k8s.io=true
	I1209 05:40:41.390479 1422398 oci.go:103] Successfully created a docker volume newest-cni-262540
	I1209 05:40:41.390571 1422398 cli_runner.go:164] Run: docker run --rm --name newest-cni-262540-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-262540 --entrypoint /usr/bin/test -v newest-cni-262540:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c -d /var/lib
	I1209 05:40:41.957365 1422398 oci.go:107] Successfully prepared a docker volume newest-cni-262540
	I1209 05:40:41.957440 1422398 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1209 05:40:41.957454 1422398 kic.go:194] Starting extracting preloaded images to volume ...
	I1209 05:40:41.957523 1422398 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-262540:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c -I lz4 -xf /preloaded.tar -C /extractDir
	I1209 05:40:46.577478 1422398 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-262540:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c -I lz4 -xf /preloaded.tar -C /extractDir: (4.619919939s)
	I1209 05:40:46.577511 1422398 kic.go:203] duration metric: took 4.620053703s to extract preloaded images to volume ...
	W1209 05:40:46.577655 1422398 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1209 05:40:46.577765 1422398 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1209 05:40:46.641962 1422398 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-262540 --name newest-cni-262540 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-262540 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-262540 --network newest-cni-262540 --ip 192.168.76.2 --volume newest-cni-262540:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c
	I1209 05:40:46.963179 1422398 cli_runner.go:164] Run: docker container inspect newest-cni-262540 --format={{.State.Running}}
	I1209 05:40:46.990367 1422398 cli_runner.go:164] Run: docker container inspect newest-cni-262540 --format={{.State.Status}}
	I1209 05:40:47.023042 1422398 cli_runner.go:164] Run: docker exec newest-cni-262540 stat /var/lib/dpkg/alternatives/iptables
	I1209 05:40:47.074649 1422398 oci.go:144] the created container "newest-cni-262540" has a running status.
	I1209 05:40:47.074676 1422398 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22081-1142328/.minikube/machines/newest-cni-262540/id_rsa...
	I1209 05:40:47.692225 1422398 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22081-1142328/.minikube/machines/newest-cni-262540/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1209 05:40:47.718517 1422398 cli_runner.go:164] Run: docker container inspect newest-cni-262540 --format={{.State.Status}}
	I1209 05:40:47.740875 1422398 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1209 05:40:47.740894 1422398 kic_runner.go:114] Args: [docker exec --privileged newest-cni-262540 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1209 05:40:47.780644 1422398 cli_runner.go:164] Run: docker container inspect newest-cni-262540 --format={{.State.Status}}
	I1209 05:40:47.797898 1422398 machine.go:94] provisionDockerMachine start ...
	I1209 05:40:47.798001 1422398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-262540
	I1209 05:40:47.813940 1422398 main.go:143] libmachine: Using SSH client type: native
	I1209 05:40:47.814280 1422398 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 34205 <nil> <nil>}
	I1209 05:40:47.814295 1422398 main.go:143] libmachine: About to run SSH command:
	hostname
	I1209 05:40:47.814927 1422398 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1209 05:40:50.967418 1422398 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-262540
	
	I1209 05:40:50.967444 1422398 ubuntu.go:182] provisioning hostname "newest-cni-262540"
	I1209 05:40:50.967507 1422398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-262540
	I1209 05:40:50.983898 1422398 main.go:143] libmachine: Using SSH client type: native
	I1209 05:40:50.984244 1422398 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 34205 <nil> <nil>}
	I1209 05:40:50.984261 1422398 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-262540 && echo "newest-cni-262540" | sudo tee /etc/hostname
	I1209 05:40:51.158163 1422398 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-262540
	
	I1209 05:40:51.158329 1422398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-262540
	I1209 05:40:51.176198 1422398 main.go:143] libmachine: Using SSH client type: native
	I1209 05:40:51.176519 1422398 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 34205 <nil> <nil>}
	I1209 05:40:51.176535 1422398 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-262540' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-262540/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-262540' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 05:40:51.328246 1422398 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1209 05:40:51.328276 1422398 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22081-1142328/.minikube CaCertPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22081-1142328/.minikube}
	I1209 05:40:51.328348 1422398 ubuntu.go:190] setting up certificates
	I1209 05:40:51.328357 1422398 provision.go:84] configureAuth start
	I1209 05:40:51.328443 1422398 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-262540
	I1209 05:40:51.345620 1422398 provision.go:143] copyHostCerts
	I1209 05:40:51.345692 1422398 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-1142328/.minikube/key.pem, removing ...
	I1209 05:40:51.345702 1422398 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-1142328/.minikube/key.pem
	I1209 05:40:51.345782 1422398 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22081-1142328/.minikube/key.pem (1675 bytes)
	I1209 05:40:51.345892 1422398 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.pem, removing ...
	I1209 05:40:51.345903 1422398 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.pem
	I1209 05:40:51.345937 1422398 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.pem (1078 bytes)
	I1209 05:40:51.345995 1422398 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-1142328/.minikube/cert.pem, removing ...
	I1209 05:40:51.346004 1422398 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-1142328/.minikube/cert.pem
	I1209 05:40:51.346028 1422398 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22081-1142328/.minikube/cert.pem (1123 bytes)
	I1209 05:40:51.346078 1422398 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca-key.pem org=jenkins.newest-cni-262540 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-262540]
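
configureAuth generates a docker-machine server certificate whose SAN list covers every name the endpoint may be reached by: loopback, the container IP, and the hostnames. A rough sketch of producing a SAN-bearing certificate with Go's crypto/x509 follows — self-signed here for brevity, whereas the flow above signs with ca.pem/ca-key.pem:

    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Self-signed stand-in; the real flow signs with the minikube CA.
        key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-262540"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the profile
            // SANs mirror the log: loopback, node IP, and hostnames.
            DNSNames:    []string{"localhost", "minikube", "newest-cni-262540"},
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
            KeyUsage:    x509.KeyUsageDigitalSignature,
            ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
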
	I1209 05:40:51.459612 1422398 provision.go:177] copyRemoteCerts
	I1209 05:40:51.459736 1422398 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 05:40:51.459804 1422398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-262540
	I1209 05:40:51.477068 1422398 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34205 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/newest-cni-262540/id_rsa Username:docker}
	I1209 05:40:51.583430 1422398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1209 05:40:51.599930 1422398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1209 05:40:51.616188 1422398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1209 05:40:51.632654 1422398 provision.go:87] duration metric: took 304.27698ms to configureAuth
	I1209 05:40:51.632690 1422398 ubuntu.go:206] setting minikube options for container-runtime
	I1209 05:40:51.632889 1422398 config.go:182] Loaded profile config "newest-cni-262540": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1209 05:40:51.632903 1422398 machine.go:97] duration metric: took 3.834981835s to provisionDockerMachine
	I1209 05:40:51.632910 1422398 client.go:176] duration metric: took 10.386351456s to LocalClient.Create
	I1209 05:40:51.632935 1422398 start.go:167] duration metric: took 10.386419491s to libmachine.API.Create "newest-cni-262540"
	I1209 05:40:51.632946 1422398 start.go:293] postStartSetup for "newest-cni-262540" (driver="docker")
	I1209 05:40:51.632957 1422398 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 05:40:51.633024 1422398 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 05:40:51.633069 1422398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-262540
	I1209 05:40:51.648788 1422398 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34205 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/newest-cni-262540/id_rsa Username:docker}
	I1209 05:40:51.751770 1422398 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 05:40:51.754890 1422398 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1209 05:40:51.754915 1422398 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1209 05:40:51.754931 1422398 filesync.go:126] Scanning /home/jenkins/minikube-integration/22081-1142328/.minikube/addons for local assets ...
	I1209 05:40:51.754996 1422398 filesync.go:126] Scanning /home/jenkins/minikube-integration/22081-1142328/.minikube/files for local assets ...
	I1209 05:40:51.755088 1422398 filesync.go:149] local asset: /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/ssl/certs/11442312.pem -> 11442312.pem in /etc/ssl/certs
	I1209 05:40:51.755194 1422398 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 05:40:51.762311 1422398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/ssl/certs/11442312.pem --> /etc/ssl/certs/11442312.pem (1708 bytes)
	I1209 05:40:51.778994 1422398 start.go:296] duration metric: took 146.033857ms for postStartSetup
	I1209 05:40:51.779431 1422398 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-262540
	I1209 05:40:51.798065 1422398 profile.go:143] Saving config to /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/config.json ...
	I1209 05:40:51.798353 1422398 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1209 05:40:51.798402 1422398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-262540
	I1209 05:40:51.814583 1422398 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34205 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/newest-cni-262540/id_rsa Username:docker}
	I1209 05:40:51.917312 1422398 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1209 05:40:51.922304 1422398 start.go:128] duration metric: took 10.679457533s to createHost
	I1209 05:40:51.922328 1422398 start.go:83] releasing machines lock for "newest-cni-262540", held for 10.67959362s
	I1209 05:40:51.922409 1422398 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-262540
	I1209 05:40:51.939569 1422398 ssh_runner.go:195] Run: cat /version.json
	I1209 05:40:51.939636 1422398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-262540
	I1209 05:40:51.939638 1422398 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 05:40:51.939698 1422398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-262540
	I1209 05:40:51.960875 1422398 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34205 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/newest-cni-262540/id_rsa Username:docker}
	I1209 05:40:51.963453 1422398 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34205 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/newest-cni-262540/id_rsa Username:docker}
	I1209 05:40:52.063736 1422398 ssh_runner.go:195] Run: systemctl --version
	I1209 05:40:52.156351 1422398 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 05:40:52.160600 1422398 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 05:40:52.160672 1422398 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 05:40:52.187388 1422398 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
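
The find/-exec mv pass above sidelines any pre-existing bridge and podman CNI configs by renaming them with a .mk_disabled suffix, so only the CNI minikube installs stays active. A rough Go equivalent of that rename pass — a sketch, not the real implementation; it omits find's -type f check:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    func main() {
        // Glob is maxdepth-1 over /etc/cni/net.d, like the find above.
        matches, _ := filepath.Glob("/etc/cni/net.d/*")
        for _, p := range matches {
            base := filepath.Base(p)
            if strings.HasSuffix(base, ".mk_disabled") {
                continue // already sidelined
            }
            if strings.Contains(base, "bridge") || strings.Contains(base, "podman") {
                if err := os.Rename(p, p+".mk_disabled"); err != nil {
                    fmt.Println(err)
                }
            }
        }
    }
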
	I1209 05:40:52.187415 1422398 start.go:496] detecting cgroup driver to use...
	I1209 05:40:52.187446 1422398 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1209 05:40:52.187504 1422398 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1209 05:40:52.203080 1422398 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1209 05:40:52.215843 1422398 docker.go:218] disabling cri-docker service (if available) ...
	I1209 05:40:52.215908 1422398 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 05:40:52.232148 1422398 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 05:40:52.250032 1422398 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 05:40:52.358548 1422398 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 05:40:52.481614 1422398 docker.go:234] disabling docker service ...
	I1209 05:40:52.481725 1422398 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 05:40:52.502779 1422398 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 05:40:52.515525 1422398 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 05:40:52.630357 1422398 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 05:40:52.754667 1422398 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 05:40:52.769286 1422398 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 05:40:52.785364 1422398 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1209 05:40:52.794252 1422398 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1209 05:40:52.803528 1422398 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1209 05:40:52.803619 1422398 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1209 05:40:52.812544 1422398 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1209 05:40:52.820837 1422398 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1209 05:40:52.829672 1422398 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1209 05:40:52.838554 1422398 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 05:40:52.846308 1422398 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1209 05:40:52.854529 1422398 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1209 05:40:52.863150 1422398 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1209 05:40:52.871579 1422398 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 05:40:52.878758 1422398 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 05:40:52.886006 1422398 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 05:40:53.012110 1422398 ssh_runner.go:195] Run: sudo systemctl restart containerd
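
The sed sequence above rewrites /etc/containerd/config.toml in place: it pins the pause image to registry.k8s.io/pause:3.10.1, forces SystemdCgroup = false to match the cgroupfs driver detected on the host, normalizes the runc runtime to io.containerd.runc.v2, points conf_dir at /etc/cni/net.d, and enables unprivileged ports; containerd is then restarted to pick the changes up. Here is a sketch of one of those edits done from Go rather than sed (same regexp and path as the log; must run with privilege to write the file):

    package main

    import (
        "os"
        "regexp"
    )

    // Force the cgroupfs driver by rewriting SystemdCgroup, mirroring
    // the `sed -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'`
    // step in the log above.
    func main() {
        const path = "/etc/containerd/config.toml"
        data, err := os.ReadFile(path)
        if err != nil {
            panic(err)
        }
        re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
        out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
        if err := os.WriteFile(path, out, 0o644); err != nil {
            panic(err)
        }
    }
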
	I1209 05:40:53.145258 1422398 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1209 05:40:53.145356 1422398 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1209 05:40:53.148998 1422398 start.go:564] Will wait 60s for crictl version
	I1209 05:40:53.149063 1422398 ssh_runner.go:195] Run: which crictl
	I1209 05:40:53.152446 1422398 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1209 05:40:53.177386 1422398 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1209 05:40:53.177452 1422398 ssh_runner.go:195] Run: containerd --version
	I1209 05:40:53.199507 1422398 ssh_runner.go:195] Run: containerd --version
	I1209 05:40:53.225320 1422398 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1209 05:40:53.228305 1422398 cli_runner.go:164] Run: docker network inspect newest-cni-262540 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1209 05:40:53.243962 1422398 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1209 05:40:53.247757 1422398 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
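
This one-liner is minikube's idempotent /etc/hosts update: strip any stale host.minikube.internal line, append the fresh mapping, and sudo cp the staged file back over /etc/hosts (cp rather than mv, since a container's /etc/hosts is a bind mount and cannot be replaced by rename). The same pattern repeats below for control-plane.minikube.internal. A plain Go sketch of the idea, assuming it runs as root inside the node:

    package main

    import (
        "os"
        "strings"
    )

    // upsertHost ensures /etc/hosts maps name to ip exactly once.
    // Sketch only: the real flow stages the file in /tmp and `sudo cp`s it.
    func upsertHost(ip, name string) error {
        data, err := os.ReadFile("/etc/hosts")
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if strings.HasSuffix(line, "\t"+name) {
                continue // drop the stale entry, like `grep -v` above
            }
            kept = append(kept, line)
        }
        kept = append(kept, ip+"\t"+name)
        // Overwrite in place: a container's /etc/hosts is a bind mount,
        // so it must be rewritten rather than atomically renamed over.
        return os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }

    func main() {
        if err := upsertHost("192.168.76.1", "host.minikube.internal"); err != nil {
            panic(err)
        }
    }
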
	I1209 05:40:53.260215 1422398 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1209 05:40:53.262990 1422398 kubeadm.go:884] updating cluster {Name:newest-cni-262540 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-262540 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 05:40:53.263149 1422398 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1209 05:40:53.263229 1422398 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 05:40:53.289432 1422398 containerd.go:627] all images are preloaded for containerd runtime.
	I1209 05:40:53.289455 1422398 containerd.go:534] Images already preloaded, skipping extraction
	I1209 05:40:53.289546 1422398 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 05:40:53.312520 1422398 containerd.go:627] all images are preloaded for containerd runtime.
	I1209 05:40:53.312544 1422398 cache_images.go:86] Images are preloaded, skipping loading
	I1209 05:40:53.312552 1422398 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1209 05:40:53.312646 1422398 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-262540 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-262540 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1209 05:40:53.312713 1422398 ssh_runner.go:195] Run: sudo crictl info
	I1209 05:40:53.337527 1422398 cni.go:84] Creating CNI manager for ""
	I1209 05:40:53.337552 1422398 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1209 05:40:53.337571 1422398 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1209 05:40:53.337595 1422398 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-262540 NodeName:newest-cni-262540 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1209 05:40:53.337729 1422398 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-262540"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
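
The kubeadm config above is four YAML documents in one file: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. Two settings matter for the failure later in this log: cgroupDriver: cgroupfs must agree with the SystemdCgroup = false written into containerd's config earlier, and podSubnet must match the kubeadm.pod-network-cidr extra option. Below is a small sketch of sanity-checking the kubelet document; it assumes the third-party gopkg.in/yaml.v3 package and is not part of minikube's own flow:

    package main

    import (
        "fmt"
        "os"
        "strings"

        "gopkg.in/yaml.v3"
    )

    func main() {
        // Split the multi-document kubeadm.yaml and inspect the
        // KubeletConfiguration; cgroupDriver must match containerd's
        // SystemdCgroup setting or the kubelet will not come up cleanly.
        data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml")
        if err != nil {
            panic(err)
        }
        for _, doc := range strings.Split(string(data), "\n---\n") {
            var m map[string]interface{}
            if yaml.Unmarshal([]byte(doc), &m) != nil || m["kind"] != "KubeletConfiguration" {
                continue
            }
            fmt.Println("cgroupDriver:", m["cgroupDriver"]) // expect "cgroupfs" here
        }
    }
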
	
	I1209 05:40:53.337802 1422398 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1209 05:40:53.345447 1422398 binaries.go:51] Found k8s binaries, skipping transfer
	I1209 05:40:53.345517 1422398 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1209 05:40:53.352930 1422398 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1209 05:40:53.365409 1422398 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1209 05:40:53.377954 1422398 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
	I1209 05:40:53.391187 1422398 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1209 05:40:53.394878 1422398 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 05:40:53.404484 1422398 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 05:40:53.509615 1422398 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 05:40:53.532992 1422398 certs.go:69] Setting up /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540 for IP: 192.168.76.2
	I1209 05:40:53.533014 1422398 certs.go:195] generating shared ca certs ...
	I1209 05:40:53.533065 1422398 certs.go:227] acquiring lock for ca certs: {Name:mk15788702f8c4e23b5aeab3f44961d296fab259 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 05:40:53.533239 1422398 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.key
	I1209 05:40:53.533305 1422398 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/proxy-client-ca.key
	I1209 05:40:53.533322 1422398 certs.go:257] generating profile certs ...
	I1209 05:40:53.533397 1422398 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/client.key
	I1209 05:40:53.533414 1422398 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/client.crt with IP's: []
	I1209 05:40:53.604706 1422398 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/client.crt ...
	I1209 05:40:53.604742 1422398 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/client.crt: {Name:mk908e1c63967383d20a56065c79b4bc0877c829 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 05:40:53.604954 1422398 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/client.key ...
	I1209 05:40:53.604968 1422398 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/client.key: {Name:mk0782d8c9cde6107bc905e7c1ffdb2b8a8e707c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 05:40:53.605064 1422398 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/apiserver.key.0ed49b31
	I1209 05:40:53.605085 1422398 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/apiserver.crt.0ed49b31 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1209 05:40:53.850901 1422398 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/apiserver.crt.0ed49b31 ...
	I1209 05:40:53.850943 1422398 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/apiserver.crt.0ed49b31: {Name:mkd1e6249eaef6a320629a45c3aa63c6b2fe9252 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 05:40:53.851131 1422398 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/apiserver.key.0ed49b31 ...
	I1209 05:40:53.851147 1422398 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/apiserver.key.0ed49b31: {Name:mk9df2970f8e62123fc8a73f846dec85a46dbe82 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 05:40:53.851239 1422398 certs.go:382] copying /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/apiserver.crt.0ed49b31 -> /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/apiserver.crt
	I1209 05:40:53.851366 1422398 certs.go:386] copying /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/apiserver.key.0ed49b31 -> /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/apiserver.key
	I1209 05:40:53.851432 1422398 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/proxy-client.key
	I1209 05:40:53.851456 1422398 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/proxy-client.crt with IP's: []
	I1209 05:40:54.332232 1422398 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/proxy-client.crt ...
	I1209 05:40:54.332268 1422398 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/proxy-client.crt: {Name:mk86c5c1261e1f4a7a13e3996ae202e7dfe017ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 05:40:54.332465 1422398 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/proxy-client.key ...
	I1209 05:40:54.332479 1422398 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/proxy-client.key: {Name:mk2b143aa140867219200e00888917dfd6928724 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 05:40:54.332672 1422398 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/1144231.pem (1338 bytes)
	W1209 05:40:54.332718 1422398 certs.go:480] ignoring /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/1144231_empty.pem, impossibly tiny 0 bytes
	I1209 05:40:54.332732 1422398 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca-key.pem (1679 bytes)
	I1209 05:40:54.332759 1422398 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem (1078 bytes)
	I1209 05:40:54.332787 1422398 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/cert.pem (1123 bytes)
	I1209 05:40:54.332816 1422398 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/key.pem (1675 bytes)
	I1209 05:40:54.332865 1422398 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/ssl/certs/11442312.pem (1708 bytes)
	I1209 05:40:54.333451 1422398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 05:40:54.351622 1422398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1209 05:40:54.369353 1422398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 05:40:54.386962 1422398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1209 05:40:54.405322 1422398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1209 05:40:54.422647 1422398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1209 05:40:54.483231 1422398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 05:40:54.515176 1422398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1209 05:40:54.533753 1422398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/ssl/certs/11442312.pem --> /usr/share/ca-certificates/11442312.pem (1708 bytes)
	I1209 05:40:54.552730 1422398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 05:40:54.570021 1422398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/1144231.pem --> /usr/share/ca-certificates/1144231.pem (1338 bytes)
	I1209 05:40:54.587455 1422398 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 05:40:54.600371 1422398 ssh_runner.go:195] Run: openssl version
	I1209 05:40:54.606642 1422398 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/11442312.pem
	I1209 05:40:54.613904 1422398 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/11442312.pem /etc/ssl/certs/11442312.pem
	I1209 05:40:54.621395 1422398 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11442312.pem
	I1209 05:40:54.624932 1422398 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 04:18 /usr/share/ca-certificates/11442312.pem
	I1209 05:40:54.625005 1422398 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11442312.pem
	I1209 05:40:54.665847 1422398 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1209 05:40:54.673355 1422398 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/11442312.pem /etc/ssl/certs/3ec20f2e.0
	I1209 05:40:54.680386 1422398 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1209 05:40:54.687518 1422398 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1209 05:40:54.694760 1422398 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 05:40:54.698200 1422398 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 04:09 /usr/share/ca-certificates/minikubeCA.pem
	I1209 05:40:54.698275 1422398 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 05:40:54.739105 1422398 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1209 05:40:54.746468 1422398 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1209 05:40:54.753754 1422398 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1144231.pem
	I1209 05:40:54.761267 1422398 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1144231.pem /etc/ssl/certs/1144231.pem
	I1209 05:40:54.768631 1422398 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1144231.pem
	I1209 05:40:54.772107 1422398 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 04:18 /usr/share/ca-certificates/1144231.pem
	I1209 05:40:54.772200 1422398 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1144231.pem
	I1209 05:40:54.812987 1422398 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1209 05:40:54.820239 1422398 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/1144231.pem /etc/ssl/certs/51391683.0
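
The openssl/ln steps above implement OpenSSL's hashed-certificate-directory convention: each CA certificate under /usr/share/ca-certificates gets a symlink in /etc/ssl/certs named <subject-hash>.0 (e.g. b5213941.0 for minikubeCA.pem), which is how OpenSSL locates a trusted issuer at verify time. A Go sketch of that step, shelling out to openssl for the hash — it links the certificate directly, a slight simplification of the log's two-step link:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    // linkCert installs certPath into the hashed-cert directory layout,
    // mirroring `openssl x509 -hash -noout -in ...` plus `ln -fs`.
    func linkCert(certPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return err
        }
        link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
        os.Remove(link) // mirror `ln -fs`: replace any existing link
        return os.Symlink(certPath, link)
    }

    func main() {
        if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Println(err)
        }
    }
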
	I1209 05:40:54.827466 1422398 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 05:40:54.830847 1422398 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1209 05:40:54.830917 1422398 kubeadm.go:401] StartCluster: {Name:newest-cni-262540 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-262540 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 05:40:54.831012 1422398 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1209 05:40:54.831072 1422398 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 05:40:54.863416 1422398 cri.go:89] found id: ""
	I1209 05:40:54.863486 1422398 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1209 05:40:54.871043 1422398 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 05:40:54.878854 1422398 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1209 05:40:54.878952 1422398 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 05:40:54.886794 1422398 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 05:40:54.886847 1422398 kubeadm.go:158] found existing configuration files:
	
	I1209 05:40:54.886908 1422398 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1209 05:40:54.894435 1422398 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 05:40:54.894550 1422398 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 05:40:54.901704 1422398 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1209 05:40:54.909273 1422398 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 05:40:54.909385 1422398 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 05:40:54.916897 1422398 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1209 05:40:54.924926 1422398 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 05:40:54.925024 1422398 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 05:40:54.932137 1422398 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1209 05:40:54.939823 1422398 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 05:40:54.939911 1422398 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1209 05:40:54.947153 1422398 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
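
Note how kubeadm is launched: sudo resets the environment, so PATH is re-exported inside the root shell with the version-pinned binaries directory first, ensuring the matching v1.35.0-beta.0 kubeadm is used; SystemVerification sits in the --ignore-preflight-errors list because, as logged above, the docker driver shares the host kernel. A sketch of that launch pattern follows (preflight flags omitted for brevity; not the full command):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        binDir := "/var/lib/minikube/binaries/v1.35.0-beta.0"
        // sudo resets PATH (env_reset), so re-export it inside the root
        // shell via env(1), mirroring the logged command.
        script := `env PATH="` + binDir + `:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml`
        out, err := exec.Command("sudo", "/bin/bash", "-c", script).CombinedOutput()
        fmt.Println(string(out))
        if err != nil {
            fmt.Println("kubeadm init failed:", err)
        }
    }
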
	I1209 05:40:54.985945 1422398 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1209 05:40:54.986006 1422398 kubeadm.go:319] [preflight] Running pre-flight checks
	I1209 05:40:55.098038 1422398 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1209 05:40:55.098124 1422398 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1209 05:40:55.098168 1422398 kubeadm.go:319] OS: Linux
	I1209 05:40:55.098224 1422398 kubeadm.go:319] CGROUPS_CPU: enabled
	I1209 05:40:55.098279 1422398 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1209 05:40:55.098332 1422398 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1209 05:40:55.098392 1422398 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1209 05:40:55.098445 1422398 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1209 05:40:55.098502 1422398 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1209 05:40:55.098554 1422398 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1209 05:40:55.098607 1422398 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1209 05:40:55.098661 1422398 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1209 05:40:55.213327 1422398 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1209 05:40:55.213517 1422398 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1209 05:40:55.213698 1422398 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1209 05:40:55.232400 1422398 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1209 05:40:55.239096 1422398 out.go:252]   - Generating certificates and keys ...
	I1209 05:40:55.239277 1422398 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1209 05:40:55.239377 1422398 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1209 05:40:55.754714 1422398 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1209 05:40:56.183780 1422398 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1209 05:40:56.537089 1422398 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1209 05:40:56.838991 1422398 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1209 05:40:57.144061 1422398 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1209 05:40:57.144319 1422398 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-262540] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1209 05:40:57.237080 1422398 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1209 05:40:57.237305 1422398 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-262540] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1209 05:40:57.410307 1422398 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1209 05:40:57.494105 1422398 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1209 05:40:57.828849 1422398 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1209 05:40:57.829173 1422398 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1209 05:40:58.186047 1422398 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1209 05:40:58.553535 1422398 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1209 05:40:58.846953 1422398 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1209 05:40:59.216978 1422398 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1209 05:40:59.442501 1422398 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1209 05:40:59.443253 1422398 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1209 05:40:59.445958 1422398 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1209 05:40:59.449559 1422398 out.go:252]   - Booting up control plane ...
	I1209 05:40:59.449660 1422398 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1209 05:40:59.449739 1422398 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1209 05:40:59.449809 1422398 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1209 05:40:59.466855 1422398 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1209 05:40:59.467191 1422398 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1209 05:40:59.475169 1422398 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1209 05:40:59.475483 1422398 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1209 05:40:59.475706 1422398 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1209 05:40:59.606469 1422398 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1209 05:40:59.606609 1422398 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1209 05:44:59.607480 1422398 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.00121326s
	I1209 05:44:59.607519 1422398 kubeadm.go:319] 
	I1209 05:44:59.607618 1422398 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1209 05:44:59.607726 1422398 kubeadm.go:319] 	- The kubelet is not running
	I1209 05:44:59.607978 1422398 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1209 05:44:59.607986 1422398 kubeadm.go:319] 
	I1209 05:44:59.608454 1422398 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1209 05:44:59.608521 1422398 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1209 05:44:59.608577 1422398 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1209 05:44:59.608582 1422398 kubeadm.go:319] 
	I1209 05:44:59.613231 1422398 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1209 05:44:59.613828 1422398 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1209 05:44:59.613984 1422398 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1209 05:44:59.614238 1422398 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1209 05:44:59.614252 1422398 kubeadm.go:319] 
	I1209 05:44:59.614382 1422398 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
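
The kubelet-check phase that fails here is just an HTTP probe: kubeadm polls the kubelet's local healthz endpoint on 127.0.0.1:10248 until it returns 200 OK or the 4m0s budget is spent. "context deadline exceeded" means the kubelet never became healthy at all, which is consistent with the cgroups v1 warning above: on a cgroup v1 host, kubelet v1.35+ needs the FailCgroupV1 option set to false to start. A minimal reproduction of the probe:

    package main

    import (
        "context"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        // kubeadm's kubelet-check: poll the local healthz endpoint
        // until it answers 200 OK or the 4m0s budget is spent.
        ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
        defer cancel()
        for {
            req, _ := http.NewRequestWithContext(ctx, http.MethodGet,
                "http://127.0.0.1:10248/healthz", nil)
            resp, err := http.DefaultClient.Do(req)
            if err == nil {
                healthy := resp.StatusCode == http.StatusOK
                resp.Body.Close()
                if healthy {
                    fmt.Println("kubelet healthy")
                    return
                }
            }
            select {
            case <-ctx.Done():
                // This is the "context deadline exceeded" seen in the log.
                fmt.Println("kubelet not healthy:", ctx.Err())
                return
            case <-time.After(time.Second):
            }
        }
    }
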
	W1209 05:44:59.614451 1422398 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-262540] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-262540] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00121326s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1209 05:44:59.614533 1422398 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1209 05:45:00.103679 1422398 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 05:45:00.173261 1422398 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1209 05:45:00.173416 1422398 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 05:45:00.208469 1422398 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 05:45:00.208544 1422398 kubeadm.go:158] found existing configuration files:
	
	I1209 05:45:00.208645 1422398 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1209 05:45:00.241859 1422398 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 05:45:00.241972 1422398 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 05:45:00.286462 1422398 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1209 05:45:00.323140 1422398 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 05:45:00.323227 1422398 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 05:45:00.375275 1422398 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1209 05:45:00.422213 1422398 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 05:45:00.422297 1422398 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 05:45:00.482732 1422398 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1209 05:45:00.551039 1422398 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 05:45:00.551196 1422398 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1209 05:45:00.603184 1422398 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1209 05:45:00.737020 1422398 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1209 05:45:00.737088 1422398 kubeadm.go:319] [preflight] Running pre-flight checks
	I1209 05:45:00.854575 1422398 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1209 05:45:00.854658 1422398 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1209 05:45:00.854699 1422398 kubeadm.go:319] OS: Linux
	I1209 05:45:00.854747 1422398 kubeadm.go:319] CGROUPS_CPU: enabled
	I1209 05:45:00.854798 1422398 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1209 05:45:00.854848 1422398 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1209 05:45:00.854898 1422398 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1209 05:45:00.854948 1422398 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1209 05:45:00.854997 1422398 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1209 05:45:00.855044 1422398 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1209 05:45:00.855095 1422398 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1209 05:45:00.855143 1422398 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1209 05:45:00.931863 1422398 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1209 05:45:00.931972 1422398 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1209 05:45:00.932087 1422398 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1209 05:45:00.939118 1422398 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1209 05:45:00.942776 1422398 out.go:252]   - Generating certificates and keys ...
	I1209 05:45:00.942945 1422398 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1209 05:45:00.943041 1422398 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1209 05:45:00.943160 1422398 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1209 05:45:00.943259 1422398 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1209 05:45:00.943342 1422398 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1209 05:45:00.943403 1422398 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1209 05:45:00.943494 1422398 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1209 05:45:00.943590 1422398 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1209 05:45:00.943707 1422398 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1209 05:45:00.943793 1422398 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1209 05:45:00.944009 1422398 kubeadm.go:319] [certs] Using the existing "sa" key
	I1209 05:45:00.944161 1422398 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1209 05:45:01.208491 1422398 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1209 05:45:01.530404 1422398 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1209 05:45:01.608144 1422398 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1209 05:45:02.097879 1422398 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1209 05:45:02.557838 1422398 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1209 05:45:02.558503 1422398 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1209 05:45:02.561184 1422398 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1209 05:45:02.564240 1422398 out.go:252]   - Booting up control plane ...
	I1209 05:45:02.564359 1422398 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1209 05:45:02.564446 1422398 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1209 05:45:02.566129 1422398 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1209 05:45:02.587668 1422398 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1209 05:45:02.588156 1422398 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1209 05:45:02.596446 1422398 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1209 05:45:02.596549 1422398 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1209 05:45:02.596595 1422398 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1209 05:45:02.726040 1422398 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1209 05:45:02.726160 1422398 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1209 05:49:02.727137 1422398 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001512454s
	I1209 05:49:02.727164 1422398 kubeadm.go:319] 
	I1209 05:49:02.727221 1422398 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1209 05:49:02.727255 1422398 kubeadm.go:319] 	- The kubelet is not running
	I1209 05:49:02.727360 1422398 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1209 05:49:02.727366 1422398 kubeadm.go:319] 
	I1209 05:49:02.727470 1422398 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1209 05:49:02.727502 1422398 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1209 05:49:02.727533 1422398 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1209 05:49:02.727537 1422398 kubeadm.go:319] 
	I1209 05:49:02.737230 1422398 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1209 05:49:02.737653 1422398 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1209 05:49:02.737766 1422398 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1209 05:49:02.738004 1422398 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1209 05:49:02.738014 1422398 kubeadm.go:319] 
	I1209 05:49:02.738083 1422398 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1209 05:49:02.738133 1422398 kubeadm.go:403] duration metric: took 8m7.90723854s to StartCluster
	I1209 05:49:02.738172 1422398 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:49:02.738235 1422398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:49:02.766379 1422398 cri.go:89] found id: ""
	I1209 05:49:02.766408 1422398 logs.go:282] 0 containers: []
	W1209 05:49:02.766416 1422398 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:49:02.766423 1422398 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:49:02.766487 1422398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:49:02.790398 1422398 cri.go:89] found id: ""
	I1209 05:49:02.790423 1422398 logs.go:282] 0 containers: []
	W1209 05:49:02.790431 1422398 logs.go:284] No container was found matching "etcd"
	I1209 05:49:02.790437 1422398 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:49:02.790493 1422398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:49:02.814861 1422398 cri.go:89] found id: ""
	I1209 05:49:02.814885 1422398 logs.go:282] 0 containers: []
	W1209 05:49:02.814894 1422398 logs.go:284] No container was found matching "coredns"
	I1209 05:49:02.814900 1422398 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:49:02.814958 1422398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:49:02.838939 1422398 cri.go:89] found id: ""
	I1209 05:49:02.838964 1422398 logs.go:282] 0 containers: []
	W1209 05:49:02.838973 1422398 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:49:02.838979 1422398 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:49:02.839047 1422398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:49:02.863333 1422398 cri.go:89] found id: ""
	I1209 05:49:02.863398 1422398 logs.go:282] 0 containers: []
	W1209 05:49:02.863421 1422398 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:49:02.863440 1422398 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:49:02.863527 1422398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:49:02.887104 1422398 cri.go:89] found id: ""
	I1209 05:49:02.887134 1422398 logs.go:282] 0 containers: []
	W1209 05:49:02.887152 1422398 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:49:02.887159 1422398 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:49:02.887226 1422398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:49:02.911004 1422398 cri.go:89] found id: ""
	I1209 05:49:02.911031 1422398 logs.go:282] 0 containers: []
	W1209 05:49:02.911039 1422398 logs.go:284] No container was found matching "kindnet"
	I1209 05:49:02.911049 1422398 logs.go:123] Gathering logs for kubelet ...
	I1209 05:49:02.911061 1422398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:49:02.967158 1422398 logs.go:123] Gathering logs for dmesg ...
	I1209 05:49:02.967192 1422398 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:49:02.983481 1422398 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:49:02.983507 1422398 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:49:03.049617 1422398 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:49:03.041414    4809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:49:03.042038    4809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:49:03.043805    4809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:49:03.044324    4809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:49:03.045788    4809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:49:03.041414    4809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:49:03.042038    4809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:49:03.043805    4809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:49:03.044324    4809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:49:03.045788    4809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:49:03.049650 1422398 logs.go:123] Gathering logs for containerd ...
	I1209 05:49:03.049664 1422398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:49:03.089335 1422398 logs.go:123] Gathering logs for container status ...
	I1209 05:49:03.089371 1422398 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1209 05:49:03.115730 1422398 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001512454s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1209 05:49:03.115806 1422398 out.go:285] * 
	W1209 05:49:03.116006 1422398 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001512454s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1209 05:49:03.116054 1422398 out.go:285] * 
	W1209 05:49:03.118239 1422398 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 05:49:03.123846 1422398 out.go:203] 
	W1209 05:49:03.126762 1422398 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001512454s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1209 05:49:03.126805 1422398 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1209 05:49:03.126825 1422398 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1209 05:49:03.129920 1422398 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:186: failed starting minikube -first start-. args "out/minikube-linux-arm64 start -p newest-cni-262540 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0": exit status 109
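The pattern above repeats across both init attempts: kubeadm writes all manifests and starts the kubelet, but the health probe at http://127.0.0.1:10248/healthz never answers within the 4m0s window, and crictl finds no control-plane containers at all. Together with the [WARNING SystemVerification] text (cgroups v1 detected; kubelet v1.35+ requires the 'FailCgroupV1' configuration option set to 'false' to tolerate it), the likeliest reading is that the kubelet exits at startup on this cgroup v1 host. A minimal triage sketch, assuming the profile name from this run and using only the commands the log itself recommends (the cgroup-driver flag is minikube's own suggestion, not a verified fix):

	# inspect the kubelet inside the minikube node (profile name taken from this run)
	minikube ssh -p newest-cni-262540 -- sudo systemctl status kubelet
	minikube ssh -p newest-cni-262540 -- "sudo journalctl -xeu kubelet | tail -n 100"

	# retry with the remediation proposed in the log output above
	minikube delete -p newest-cni-262540
	minikube start -p newest-cni-262540 --driver=docker --container-runtime=containerd \
	    --kubernetes-version=v1.35.0-beta.0 --extra-config=kubelet.cgroup-driver=systemd

If the kubelet journal shows it failing its cgroup v1 check, the fix belongs in the kubelet configuration (the 'FailCgroupV1' option named in the warning) or in moving the host to cgroups v2, rather than in the cgroup-driver flag.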
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/FirstStart]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/FirstStart]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-262540
helpers_test.go:243: (dbg) docker inspect newest-cni-262540:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ed3de5d59c962edb9b6bef3201cdaec7fe174aa520c616b7fefbc3014b60f5d7",
	        "Created": "2025-12-09T05:40:46.656747886Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1422815,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-09T05:40:46.750006721Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e4eb91ed18a24161fce60c7cdd660144ecd5b8c5029dc2dea2c5e423c2f48ce4",
	        "ResolvConfPath": "/var/lib/docker/containers/ed3de5d59c962edb9b6bef3201cdaec7fe174aa520c616b7fefbc3014b60f5d7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ed3de5d59c962edb9b6bef3201cdaec7fe174aa520c616b7fefbc3014b60f5d7/hostname",
	        "HostsPath": "/var/lib/docker/containers/ed3de5d59c962edb9b6bef3201cdaec7fe174aa520c616b7fefbc3014b60f5d7/hosts",
	        "LogPath": "/var/lib/docker/containers/ed3de5d59c962edb9b6bef3201cdaec7fe174aa520c616b7fefbc3014b60f5d7/ed3de5d59c962edb9b6bef3201cdaec7fe174aa520c616b7fefbc3014b60f5d7-json.log",
	        "Name": "/newest-cni-262540",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-262540:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-262540",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ed3de5d59c962edb9b6bef3201cdaec7fe174aa520c616b7fefbc3014b60f5d7",
	                "LowerDir": "/var/lib/docker/overlay2/6415c7c451ba9f1315edabee60b733d482ef79cadbd27911dea3809654b6c1ec-init/diff:/var/lib/docker/overlay2/c44bb57aa59cc265266f37f2bb6e7ec0e7d641c3b4aeaa57e6d23deec6f0d1d4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6415c7c451ba9f1315edabee60b733d482ef79cadbd27911dea3809654b6c1ec/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6415c7c451ba9f1315edabee60b733d482ef79cadbd27911dea3809654b6c1ec/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6415c7c451ba9f1315edabee60b733d482ef79cadbd27911dea3809654b6c1ec/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-262540",
	                "Source": "/var/lib/docker/volumes/newest-cni-262540/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-262540",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-262540",
	                "name.minikube.sigs.k8s.io": "newest-cni-262540",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9954a06c834f33e28b10a23b7f87c831e396c1056f7a6615dc76e0d514d93454",
	            "SandboxKey": "/var/run/docker/netns/9954a06c834f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34205"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34206"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34209"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34207"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34208"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-262540": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ba:02:a6:df:bc:8f",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "aa89e26051ba524ceb1352e47e7602df84b3dfd74bbc435c72069a1036fceebf",
	                    "EndpointID": "efb22bfc5d2fa7cd356d48b051835d563f10405c6482b333b29bcce636ebb681",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-262540",
	                        "ed3de5d59c96"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
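The `Ports` map in the inspect output above records the loopback ports Docker published for this container (SSH on 22/tcp at 34205, the API server's 8443/tcp at 34208, and so on). If a published port needs to be recovered by hand, the same Go template that minikube itself runs later in this log should work, assuming the newest-cni-262540 container still exists:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' newest-cni-262540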
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-262540 -n newest-cni-262540
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-262540 -n newest-cni-262540: exit status 6 (330.00328ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1209 05:49:03.525081 1434651 status.go:458] kubeconfig endpoint: get endpoint: "newest-cni-262540" does not appear in /home/jenkins/minikube-integration/22081-1142328/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
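The stale-context warning above comes with its own remedy: the profile's endpoint is missing from the kubeconfig, and `minikube update-context` (the fix the status output suggests) rewrites it. A minimal repair-and-verify sequence, assuming the cluster is otherwise reachable and kubectl is on PATH, would be:

	out/minikube-linux-arm64 -p newest-cni-262540 update-context
	kubectl config current-context
	kubectl --context newest-cni-262540 get nodes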
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/FirstStart FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/FirstStart]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-262540 logs -n 25
helpers_test.go:260: TestStartStop/group/newest-cni/serial/FirstStart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                            ARGS                                                                                                                            │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p no-preload-842269 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-842269            │ jenkins │ v1.37.0 │ 09 Dec 25 05:35 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-432108 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                   │ embed-certs-432108           │ jenkins │ v1.37.0 │ 09 Dec 25 05:36 UTC │ 09 Dec 25 05:36 UTC │
	│ stop    │ -p embed-certs-432108 --alsologtostderr -v=3                                                                                                                                                                                                               │ embed-certs-432108           │ jenkins │ v1.37.0 │ 09 Dec 25 05:36 UTC │ 09 Dec 25 05:36 UTC │
	│ addons  │ enable dashboard -p embed-certs-432108 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                              │ embed-certs-432108           │ jenkins │ v1.37.0 │ 09 Dec 25 05:36 UTC │ 09 Dec 25 05:36 UTC │
	│ start   │ -p embed-certs-432108 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-432108           │ jenkins │ v1.37.0 │ 09 Dec 25 05:36 UTC │ 09 Dec 25 05:37 UTC │
	│ image   │ embed-certs-432108 image list --format=json                                                                                                                                                                                                                │ embed-certs-432108           │ jenkins │ v1.37.0 │ 09 Dec 25 05:37 UTC │ 09 Dec 25 05:37 UTC │
	│ pause   │ -p embed-certs-432108 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-432108           │ jenkins │ v1.37.0 │ 09 Dec 25 05:37 UTC │ 09 Dec 25 05:37 UTC │
	│ unpause │ -p embed-certs-432108 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-432108           │ jenkins │ v1.37.0 │ 09 Dec 25 05:37 UTC │ 09 Dec 25 05:37 UTC │
	│ delete  │ -p embed-certs-432108                                                                                                                                                                                                                                      │ embed-certs-432108           │ jenkins │ v1.37.0 │ 09 Dec 25 05:37 UTC │ 09 Dec 25 05:37 UTC │
	│ delete  │ -p embed-certs-432108                                                                                                                                                                                                                                      │ embed-certs-432108           │ jenkins │ v1.37.0 │ 09 Dec 25 05:37 UTC │ 09 Dec 25 05:37 UTC │
	│ start   │ -p default-k8s-diff-port-564611 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-564611 │ jenkins │ v1.37.0 │ 09 Dec 25 05:37 UTC │ 09 Dec 25 05:39 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-564611 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ default-k8s-diff-port-564611 │ jenkins │ v1.37.0 │ 09 Dec 25 05:39 UTC │ 09 Dec 25 05:39 UTC │
	│ stop    │ -p default-k8s-diff-port-564611 --alsologtostderr -v=3                                                                                                                                                                                                     │ default-k8s-diff-port-564611 │ jenkins │ v1.37.0 │ 09 Dec 25 05:39 UTC │ 09 Dec 25 05:39 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-564611 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ default-k8s-diff-port-564611 │ jenkins │ v1.37.0 │ 09 Dec 25 05:39 UTC │ 09 Dec 25 05:39 UTC │
	│ start   │ -p default-k8s-diff-port-564611 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-564611 │ jenkins │ v1.37.0 │ 09 Dec 25 05:39 UTC │ 09 Dec 25 05:40 UTC │
	│ image   │ default-k8s-diff-port-564611 image list --format=json                                                                                                                                                                                                      │ default-k8s-diff-port-564611 │ jenkins │ v1.37.0 │ 09 Dec 25 05:40 UTC │ 09 Dec 25 05:40 UTC │
	│ pause   │ -p default-k8s-diff-port-564611 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-564611 │ jenkins │ v1.37.0 │ 09 Dec 25 05:40 UTC │ 09 Dec 25 05:40 UTC │
	│ unpause │ -p default-k8s-diff-port-564611 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-564611 │ jenkins │ v1.37.0 │ 09 Dec 25 05:40 UTC │ 09 Dec 25 05:40 UTC │
	│ delete  │ -p default-k8s-diff-port-564611                                                                                                                                                                                                                            │ default-k8s-diff-port-564611 │ jenkins │ v1.37.0 │ 09 Dec 25 05:40 UTC │ 09 Dec 25 05:40 UTC │
	│ delete  │ -p default-k8s-diff-port-564611                                                                                                                                                                                                                            │ default-k8s-diff-port-564611 │ jenkins │ v1.37.0 │ 09 Dec 25 05:40 UTC │ 09 Dec 25 05:40 UTC │
	│ start   │ -p newest-cni-262540 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-262540            │ jenkins │ v1.37.0 │ 09 Dec 25 05:40 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-842269 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ no-preload-842269            │ jenkins │ v1.37.0 │ 09 Dec 25 05:43 UTC │                     │
	│ stop    │ -p no-preload-842269 --alsologtostderr -v=3                                                                                                                                                                                                                │ no-preload-842269            │ jenkins │ v1.37.0 │ 09 Dec 25 05:45 UTC │ 09 Dec 25 05:45 UTC │
	│ addons  │ enable dashboard -p no-preload-842269 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                               │ no-preload-842269            │ jenkins │ v1.37.0 │ 09 Dec 25 05:45 UTC │ 09 Dec 25 05:45 UTC │
	│ start   │ -p no-preload-842269 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-842269            │ jenkins │ v1.37.0 │ 09 Dec 25 05:45 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/09 05:45:19
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 05:45:19.304985 1429857 out.go:360] Setting OutFile to fd 1 ...
	I1209 05:45:19.305094 1429857 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 05:45:19.305101 1429857 out.go:374] Setting ErrFile to fd 2...
	I1209 05:45:19.305106 1429857 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 05:45:19.305469 1429857 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-1142328/.minikube/bin
	I1209 05:45:19.305897 1429857 out.go:368] Setting JSON to false
	I1209 05:45:19.307371 1429857 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":30443,"bootTime":1765228677,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1209 05:45:19.307474 1429857 start.go:143] virtualization:  
	I1209 05:45:19.312362 1429857 out.go:179] * [no-preload-842269] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1209 05:45:19.315432 1429857 out.go:179]   - MINIKUBE_LOCATION=22081
	I1209 05:45:19.315644 1429857 notify.go:221] Checking for updates...
	I1209 05:45:19.321156 1429857 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 05:45:19.324049 1429857 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22081-1142328/kubeconfig
	I1209 05:45:19.326954 1429857 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-1142328/.minikube
	I1209 05:45:19.329810 1429857 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1209 05:45:19.332669 1429857 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 05:45:19.336051 1429857 config.go:182] Loaded profile config "no-preload-842269": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1209 05:45:19.336708 1429857 driver.go:422] Setting default libvirt URI to qemu:///system
	I1209 05:45:19.364223 1429857 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1209 05:45:19.364347 1429857 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 05:45:19.423199 1429857 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-09 05:45:19.414226912 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1209 05:45:19.423304 1429857 docker.go:319] overlay module found
	I1209 05:45:19.426467 1429857 out.go:179] * Using the docker driver based on existing profile
	I1209 05:45:19.429450 1429857 start.go:309] selected driver: docker
	I1209 05:45:19.429469 1429857 start.go:927] validating driver "docker" against &{Name:no-preload-842269 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-842269 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 05:45:19.429573 1429857 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 05:45:19.430271 1429857 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 05:45:19.484934 1429857 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-09 05:45:19.476108747 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1209 05:45:19.485260 1429857 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 05:45:19.485294 1429857 cni.go:84] Creating CNI manager for ""
	I1209 05:45:19.485352 1429857 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1209 05:45:19.485394 1429857 start.go:353] cluster config:
	{Name:no-preload-842269 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-842269 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 05:45:19.488591 1429857 out.go:179] * Starting "no-preload-842269" primary control-plane node in "no-preload-842269" cluster
	I1209 05:45:19.491427 1429857 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1209 05:45:19.494310 1429857 out.go:179] * Pulling base image v0.0.48-1765184860-22066 ...
	I1209 05:45:19.497153 1429857 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1209 05:45:19.497221 1429857 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local docker daemon
	I1209 05:45:19.497291 1429857 profile.go:143] Saving config to /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/no-preload-842269/config.json ...
	I1209 05:45:19.497571 1429857 cache.go:107] acquiring lock: {Name:mkf65d4ffaf3daf987b7ba0301a9962f00106981 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 05:45:19.497666 1429857 cache.go:115] /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1209 05:45:19.497678 1429857 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22081-1142328/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 116.666µs
	I1209 05:45:19.497690 1429857 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1209 05:45:19.497702 1429857 cache.go:107] acquiring lock: {Name:mk4d0c4ab95f11691dbecfbd7b2c72b3028abf9f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 05:45:19.497735 1429857 cache.go:115] /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1209 05:45:19.497745 1429857 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22081-1142328/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 45.152µs
	I1209 05:45:19.497752 1429857 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1209 05:45:19.497766 1429857 cache.go:107] acquiring lock: {Name:mk7cb8e420e05ffddcb417dedf3ddace46afcf1b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 05:45:19.497807 1429857 cache.go:115] /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1209 05:45:19.497815 1429857 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22081-1142328/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 50.033µs
	I1209 05:45:19.497822 1429857 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1209 05:45:19.497835 1429857 cache.go:107] acquiring lock: {Name:mka2eb1b7c29ae7ae604d5f65c47b25198cfb45b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 05:45:19.497867 1429857 cache.go:115] /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1209 05:45:19.497876 1429857 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22081-1142328/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 42.009µs
	I1209 05:45:19.497883 1429857 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1209 05:45:19.497892 1429857 cache.go:107] acquiring lock: {Name:mkade1779cb2ecc1c54a36bd1719bf2ef87bdf51 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 05:45:19.497922 1429857 cache.go:115] /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1209 05:45:19.497931 1429857 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22081-1142328/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 40.704µs
	I1209 05:45:19.497942 1429857 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1209 05:45:19.497955 1429857 cache.go:107] acquiring lock: {Name:mk604b76e7428f7b39bf507a7086fea810617cc7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 05:45:19.497987 1429857 cache.go:115] /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1209 05:45:19.497996 1429857 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22081-1142328/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 42.46µs
	I1209 05:45:19.498002 1429857 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1209 05:45:19.498011 1429857 cache.go:107] acquiring lock: {Name:mk605cb0bdcc667f1a6cc01dc2d318b41822c88f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 05:45:19.498037 1429857 cache.go:115] /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1209 05:45:19.498046 1429857 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/22081-1142328/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 36.306µs
	I1209 05:45:19.498052 1429857 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1209 05:45:19.498060 1429857 cache.go:107] acquiring lock: {Name:mk288542758fec96b5cb8ac3de75700c31bfbfc0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 05:45:19.498089 1429857 cache.go:115] /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1209 05:45:19.498098 1429857 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22081-1142328/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 38.916µs
	I1209 05:45:19.498104 1429857 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1209 05:45:19.498110 1429857 cache.go:87] Successfully saved all images to host disk.
	I1209 05:45:19.517152 1429857 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local docker daemon, skipping pull
	I1209 05:45:19.517175 1429857 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c exists in daemon, skipping load
	I1209 05:45:19.517194 1429857 cache.go:243] Successfully downloaded all kic artifacts
	I1209 05:45:19.517225 1429857 start.go:360] acquireMachinesLock for no-preload-842269: {Name:mk19b7be61094a19b29603fb95f6d7b282529614 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 05:45:19.517288 1429857 start.go:364] duration metric: took 43.707µs to acquireMachinesLock for "no-preload-842269"
	I1209 05:45:19.517311 1429857 start.go:96] Skipping create...Using existing machine configuration
	I1209 05:45:19.517320 1429857 fix.go:54] fixHost starting: 
	I1209 05:45:19.517582 1429857 cli_runner.go:164] Run: docker container inspect no-preload-842269 --format={{.State.Status}}
	I1209 05:45:19.535058 1429857 fix.go:112] recreateIfNeeded on no-preload-842269: state=Stopped err=<nil>
	W1209 05:45:19.535086 1429857 fix.go:138] unexpected machine state, will restart: <nil>
	I1209 05:45:19.538423 1429857 out.go:252] * Restarting existing docker container for "no-preload-842269" ...
	I1209 05:45:19.538508 1429857 cli_runner.go:164] Run: docker start no-preload-842269
	I1209 05:45:19.801093 1429857 cli_runner.go:164] Run: docker container inspect no-preload-842269 --format={{.State.Status}}
	I1209 05:45:19.824109 1429857 kic.go:430] container "no-preload-842269" state is running.
	I1209 05:45:19.824800 1429857 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-842269
	I1209 05:45:19.850927 1429857 profile.go:143] Saving config to /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/no-preload-842269/config.json ...
	I1209 05:45:19.851169 1429857 machine.go:94] provisionDockerMachine start ...
	I1209 05:45:19.851233 1429857 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-842269
	I1209 05:45:19.872359 1429857 main.go:143] libmachine: Using SSH client type: native
	I1209 05:45:19.872683 1429857 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 34210 <nil> <nil>}
	I1209 05:45:19.872698 1429857 main.go:143] libmachine: About to run SSH command:
	hostname
	I1209 05:45:19.873510 1429857 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1209 05:45:23.031698 1429857 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-842269
	
	I1209 05:45:23.031723 1429857 ubuntu.go:182] provisioning hostname "no-preload-842269"
	I1209 05:45:23.031788 1429857 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-842269
	I1209 05:45:23.049528 1429857 main.go:143] libmachine: Using SSH client type: native
	I1209 05:45:23.049842 1429857 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 34210 <nil> <nil>}
	I1209 05:45:23.049866 1429857 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-842269 && echo "no-preload-842269" | sudo tee /etc/hostname
	I1209 05:45:23.212560 1429857 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-842269
	
	I1209 05:45:23.212638 1429857 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-842269
	I1209 05:45:23.230939 1429857 main.go:143] libmachine: Using SSH client type: native
	I1209 05:45:23.231248 1429857 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 34210 <nil> <nil>}
	I1209 05:45:23.231264 1429857 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-842269' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-842269/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-842269' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 05:45:23.384444 1429857 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1209 05:45:23.384483 1429857 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22081-1142328/.minikube CaCertPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22081-1142328/.minikube}
	I1209 05:45:23.384506 1429857 ubuntu.go:190] setting up certificates
	I1209 05:45:23.384523 1429857 provision.go:84] configureAuth start
	I1209 05:45:23.384590 1429857 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-842269
	I1209 05:45:23.401432 1429857 provision.go:143] copyHostCerts
	I1209 05:45:23.401503 1429857 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.pem, removing ...
	I1209 05:45:23.401518 1429857 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.pem
	I1209 05:45:23.401593 1429857 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.pem (1078 bytes)
	I1209 05:45:23.401705 1429857 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-1142328/.minikube/cert.pem, removing ...
	I1209 05:45:23.401714 1429857 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-1142328/.minikube/cert.pem
	I1209 05:45:23.401742 1429857 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22081-1142328/.minikube/cert.pem (1123 bytes)
	I1209 05:45:23.401834 1429857 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-1142328/.minikube/key.pem, removing ...
	I1209 05:45:23.401844 1429857 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-1142328/.minikube/key.pem
	I1209 05:45:23.401870 1429857 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22081-1142328/.minikube/key.pem (1675 bytes)
	I1209 05:45:23.401918 1429857 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca-key.pem org=jenkins.no-preload-842269 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-842269]
	I1209 05:45:24.117829 1429857 provision.go:177] copyRemoteCerts
	I1209 05:45:24.117899 1429857 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 05:45:24.117948 1429857 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-842269
	I1209 05:45:24.136847 1429857 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34210 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/no-preload-842269/id_rsa Username:docker}
	I1209 05:45:24.243917 1429857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1209 05:45:24.261228 1429857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1209 05:45:24.278688 1429857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1209 05:45:24.295602 1429857 provision.go:87] duration metric: took 911.052498ms to configureAuth
	I1209 05:45:24.295630 1429857 ubuntu.go:206] setting minikube options for container-runtime
	I1209 05:45:24.295821 1429857 config.go:182] Loaded profile config "no-preload-842269": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1209 05:45:24.295834 1429857 machine.go:97] duration metric: took 4.444658101s to provisionDockerMachine
	I1209 05:45:24.295843 1429857 start.go:293] postStartSetup for "no-preload-842269" (driver="docker")
	I1209 05:45:24.295853 1429857 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 05:45:24.295939 1429857 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 05:45:24.295989 1429857 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-842269
	I1209 05:45:24.313358 1429857 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34210 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/no-preload-842269/id_rsa Username:docker}
	I1209 05:45:24.419729 1429857 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 05:45:24.423044 1429857 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1209 05:45:24.423074 1429857 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1209 05:45:24.423102 1429857 filesync.go:126] Scanning /home/jenkins/minikube-integration/22081-1142328/.minikube/addons for local assets ...
	I1209 05:45:24.423160 1429857 filesync.go:126] Scanning /home/jenkins/minikube-integration/22081-1142328/.minikube/files for local assets ...
	I1209 05:45:24.423286 1429857 filesync.go:149] local asset: /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/ssl/certs/11442312.pem -> 11442312.pem in /etc/ssl/certs
	I1209 05:45:24.423403 1429857 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 05:45:24.430577 1429857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/ssl/certs/11442312.pem --> /etc/ssl/certs/11442312.pem (1708 bytes)
	I1209 05:45:24.448642 1429857 start.go:296] duration metric: took 152.783704ms for postStartSetup
	I1209 05:45:24.448752 1429857 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1209 05:45:24.448804 1429857 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-842269
	I1209 05:45:24.475577 1429857 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34210 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/no-preload-842269/id_rsa Username:docker}
	I1209 05:45:24.577211 1429857 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1209 05:45:24.581897 1429857 fix.go:56] duration metric: took 5.064569479s for fixHost
	I1209 05:45:24.581929 1429857 start.go:83] releasing machines lock for "no-preload-842269", held for 5.064623763s
	I1209 05:45:24.582003 1429857 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-842269
	I1209 05:45:24.598849 1429857 ssh_runner.go:195] Run: cat /version.json
	I1209 05:45:24.598910 1429857 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-842269
	I1209 05:45:24.599176 1429857 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 05:45:24.599236 1429857 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-842269
	I1209 05:45:24.617491 1429857 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34210 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/no-preload-842269/id_rsa Username:docker}
	I1209 05:45:24.625861 1429857 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34210 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/no-preload-842269/id_rsa Username:docker}
	I1209 05:45:24.719702 1429857 ssh_runner.go:195] Run: systemctl --version
	I1209 05:45:24.811867 1429857 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 05:45:24.816351 1429857 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 05:45:24.816436 1429857 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 05:45:24.824370 1429857 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1209 05:45:24.824393 1429857 start.go:496] detecting cgroup driver to use...
	I1209 05:45:24.824424 1429857 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1209 05:45:24.824478 1429857 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1209 05:45:24.842259 1429857 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1209 05:45:24.856877 1429857 docker.go:218] disabling cri-docker service (if available) ...
	I1209 05:45:24.856943 1429857 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 05:45:24.872872 1429857 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 05:45:24.886154 1429857 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 05:45:24.999208 1429857 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 05:45:25.121326 1429857 docker.go:234] disabling docker service ...
	I1209 05:45:25.121413 1429857 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 05:45:25.137073 1429857 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 05:45:25.150656 1429857 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 05:45:25.286510 1429857 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 05:45:25.394076 1429857 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 05:45:25.406549 1429857 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 05:45:25.420965 1429857 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1209 05:45:25.429321 1429857 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1209 05:45:25.437986 1429857 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1209 05:45:25.438077 1429857 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1209 05:45:25.447132 1429857 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1209 05:45:25.456037 1429857 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1209 05:45:25.464470 1429857 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1209 05:45:25.472760 1429857 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 05:45:25.480756 1429857 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1209 05:45:25.489194 1429857 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1209 05:45:25.497557 1429857 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1209 05:45:25.506153 1429857 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 05:45:25.513357 1429857 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 05:45:25.520101 1429857 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 05:45:25.626477 1429857 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1209 05:45:25.729432 1429857 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1209 05:45:25.729500 1429857 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1209 05:45:25.733824 1429857 start.go:564] Will wait 60s for crictl version
	I1209 05:45:25.733937 1429857 ssh_runner.go:195] Run: which crictl
	I1209 05:45:25.738223 1429857 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1209 05:45:25.764110 1429857 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1209 05:45:25.764179 1429857 ssh_runner.go:195] Run: containerd --version
	I1209 05:45:25.784097 1429857 ssh_runner.go:195] Run: containerd --version
	I1209 05:45:25.809525 1429857 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1209 05:45:25.812650 1429857 cli_runner.go:164] Run: docker network inspect no-preload-842269 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1209 05:45:25.828380 1429857 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1209 05:45:25.832220 1429857 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 05:45:25.842204 1429857 kubeadm.go:884] updating cluster {Name:no-preload-842269 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-842269 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 05:45:25.842335 1429857 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1209 05:45:25.842398 1429857 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 05:45:25.869412 1429857 containerd.go:627] all images are preloaded for containerd runtime.
	I1209 05:45:25.869438 1429857 cache_images.go:86] Images are preloaded, skipping loading
	I1209 05:45:25.869445 1429857 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1209 05:45:25.869544 1429857 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-842269 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-842269 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1209 05:45:25.869609 1429857 ssh_runner.go:195] Run: sudo crictl info
	I1209 05:45:25.894672 1429857 cni.go:84] Creating CNI manager for ""
	I1209 05:45:25.894698 1429857 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1209 05:45:25.894720 1429857 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1209 05:45:25.894751 1429857 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-842269 NodeName:no-preload-842269 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1209 05:45:25.894907 1429857 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-842269"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1209 05:45:25.894981 1429857 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1209 05:45:25.902766 1429857 binaries.go:51] Found k8s binaries, skipping transfer
	I1209 05:45:25.902838 1429857 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1209 05:45:25.910455 1429857 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1209 05:45:25.923076 1429857 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1209 05:45:25.937650 1429857 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
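
Note: the 2237-byte kubeadm.yaml.new written here is the config dumped above, which minikube renders from Go templates. A hedged sketch of rendering one fragment with text/template (the template text is illustrative, not minikube's actual asset):

    package main

    import (
    	"os"
    	"text/template"
    )

    // frag is an illustrative slice of the ClusterConfiguration section.
    const frag = "apiVersion: kubeadm.k8s.io/v1beta4\n" +
    	"kind: ClusterConfiguration\n" +
    	"kubernetesVersion: {{.KubernetesVersion}}\n" +
    	"controlPlaneEndpoint: {{.ControlPlaneAddress}}:{{.APIServerPort}}\n" +
    	"networking:\n" +
    	"  podSubnet: \"{{.PodSubnet}}\"\n" +
    	"  serviceSubnet: {{.ServiceCIDR}}\n"

    func main() {
    	t := template.Must(template.New("kubeadm").Parse(frag))
    	if err := t.Execute(os.Stdout, map[string]any{
    		"KubernetesVersion":   "v1.35.0-beta.0",
    		"ControlPlaneAddress": "control-plane.minikube.internal",
    		"APIServerPort":       8443,
    		"PodSubnet":           "10.244.0.0/16",
    		"ServiceCIDR":         "10.96.0.0/12",
    	}); err != nil {
    		panic(err)
    	}
    }
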
	I1209 05:45:25.951420 1429857 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1209 05:45:25.955331 1429857 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 05:45:25.964795 1429857 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 05:45:26.082166 1429857 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 05:45:26.100679 1429857 certs.go:69] Setting up /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/no-preload-842269 for IP: 192.168.85.2
	I1209 05:45:26.100745 1429857 certs.go:195] generating shared ca certs ...
	I1209 05:45:26.100786 1429857 certs.go:227] acquiring lock for ca certs: {Name:mk15788702f8c4e23b5aeab3f44961d296fab259 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 05:45:26.100943 1429857 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.key
	I1209 05:45:26.101025 1429857 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/proxy-client-ca.key
	I1209 05:45:26.101056 1429857 certs.go:257] generating profile certs ...
	I1209 05:45:26.101186 1429857 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/no-preload-842269/client.key
	I1209 05:45:26.101295 1429857 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/no-preload-842269/apiserver.key.135a6aab
	I1209 05:45:26.101368 1429857 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/no-preload-842269/proxy-client.key
	I1209 05:45:26.101513 1429857 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/1144231.pem (1338 bytes)
	W1209 05:45:26.101579 1429857 certs.go:480] ignoring /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/1144231_empty.pem, impossibly tiny 0 bytes
	I1209 05:45:26.101605 1429857 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca-key.pem (1679 bytes)
	I1209 05:45:26.101652 1429857 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem (1078 bytes)
	I1209 05:45:26.101704 1429857 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/cert.pem (1123 bytes)
	I1209 05:45:26.101777 1429857 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/key.pem (1675 bytes)
	I1209 05:45:26.101861 1429857 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/ssl/certs/11442312.pem (1708 bytes)
	I1209 05:45:26.102562 1429857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 05:45:26.122800 1429857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1209 05:45:26.142042 1429857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 05:45:26.161502 1429857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1209 05:45:26.179586 1429857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/no-preload-842269/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1209 05:45:26.196698 1429857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/no-preload-842269/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1209 05:45:26.212945 1429857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/no-preload-842269/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 05:45:26.230416 1429857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/no-preload-842269/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1209 05:45:26.247147 1429857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/ssl/certs/11442312.pem --> /usr/share/ca-certificates/11442312.pem (1708 bytes)
	I1209 05:45:26.265734 1429857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 05:45:26.282961 1429857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/1144231.pem --> /usr/share/ca-certificates/1144231.pem (1338 bytes)
	I1209 05:45:26.300125 1429857 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 05:45:26.312156 1429857 ssh_runner.go:195] Run: openssl version
	I1209 05:45:26.318566 1429857 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/11442312.pem
	I1209 05:45:26.329117 1429857 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/11442312.pem /etc/ssl/certs/11442312.pem
	I1209 05:45:26.336403 1429857 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11442312.pem
	I1209 05:45:26.340126 1429857 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 04:18 /usr/share/ca-certificates/11442312.pem
	I1209 05:45:26.340197 1429857 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11442312.pem
	I1209 05:45:26.383366 1429857 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1209 05:45:26.390871 1429857 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1209 05:45:26.398106 1429857 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1209 05:45:26.405814 1429857 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 05:45:26.409683 1429857 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 04:09 /usr/share/ca-certificates/minikubeCA.pem
	I1209 05:45:26.409750 1429857 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 05:45:26.450573 1429857 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1209 05:45:26.458322 1429857 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1144231.pem
	I1209 05:45:26.465833 1429857 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1144231.pem /etc/ssl/certs/1144231.pem
	I1209 05:45:26.473482 1429857 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1144231.pem
	I1209 05:45:26.477501 1429857 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 04:18 /usr/share/ca-certificates/1144231.pem
	I1209 05:45:26.477569 1429857 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1144231.pem
	I1209 05:45:26.518776 1429857 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
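
Note: each CA above is installed by copying the PEM into /usr/share/ca-certificates, symlinking it into /etc/ssl/certs, and then confirming that the OpenSSL subject-hash link (e.g. b5213941.0) resolves. A hedged Go sketch of that hash-link check, shelling out to openssl the same way ssh_runner does (hashLinkOK is illustrative):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    // hashLinkOK asks openssl for the cert's subject hash and verifies
    // that /etc/ssl/certs/<hash>.0 exists.
    func hashLinkOK(pemPath string) (bool, error) {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		return false, err
    	}
    	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
    	if _, err := os.Lstat(link); err != nil {
    		return false, nil // cert copied but hash link missing
    	}
    	return true, nil
    }

    func main() {
    	fmt.Println(hashLinkOK("/usr/share/ca-certificates/minikubeCA.pem"))
    }
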
	I1209 05:45:26.526248 1429857 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 05:45:26.529980 1429857 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1209 05:45:26.572441 1429857 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1209 05:45:26.613785 1429857 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1209 05:45:26.655322 1429857 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1209 05:45:26.696546 1429857 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1209 05:45:26.739135 1429857 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
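
Note: "openssl x509 -checkend 86400" exits nonzero if the certificate expires within the next 24 hours, which is how the restart path decides whether the existing control-plane certs can be reused. A hedged native equivalent in Go (validFor is illustrative):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // validFor reports whether the PEM certificate at path remains valid
    // for at least d more time, mirroring `openssl x509 -checkend`.
    func validFor(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM block in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).Before(cert.NotAfter), nil
    }

    func main() {
    	fmt.Println(validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour))
    }
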
	I1209 05:45:26.780278 1429857 kubeadm.go:401] StartCluster: {Name:no-preload-842269 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-842269 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 05:45:26.780376 1429857 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1209 05:45:26.780450 1429857 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 05:45:26.805821 1429857 cri.go:89] found id: ""
	I1209 05:45:26.805924 1429857 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1209 05:45:26.813920 1429857 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1209 05:45:26.813941 1429857 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1209 05:45:26.814022 1429857 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1209 05:45:26.821515 1429857 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1209 05:45:26.821952 1429857 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-842269" does not appear in /home/jenkins/minikube-integration/22081-1142328/kubeconfig
	I1209 05:45:26.822061 1429857 kubeconfig.go:62] /home/jenkins/minikube-integration/22081-1142328/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-842269" cluster setting kubeconfig missing "no-preload-842269" context setting]
	I1209 05:45:26.822332 1429857 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1142328/kubeconfig: {Name:mk0dc127429f88dc4fdfb2d110deebc58207c1b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 05:45:26.823581 1429857 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1209 05:45:26.832049 1429857 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1209 05:45:26.832081 1429857 kubeadm.go:602] duration metric: took 18.134254ms to restartPrimaryControlPlane
	I1209 05:45:26.832090 1429857 kubeadm.go:403] duration metric: took 51.823986ms to StartCluster
	I1209 05:45:26.832105 1429857 settings.go:142] acquiring lock: {Name:mk8fa744e3d74bf8a1cbf5ac275c9f1969ad91a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 05:45:26.832161 1429857 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22081-1142328/kubeconfig
	I1209 05:45:26.832776 1429857 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1142328/kubeconfig: {Name:mk0dc127429f88dc4fdfb2d110deebc58207c1b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 05:45:26.832985 1429857 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1209 05:45:26.833266 1429857 config.go:182] Loaded profile config "no-preload-842269": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1209 05:45:26.833313 1429857 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1209 05:45:26.833376 1429857 addons.go:70] Setting storage-provisioner=true in profile "no-preload-842269"
	I1209 05:45:26.833395 1429857 addons.go:239] Setting addon storage-provisioner=true in "no-preload-842269"
	I1209 05:45:26.833419 1429857 host.go:66] Checking if "no-preload-842269" exists ...
	I1209 05:45:26.833892 1429857 cli_runner.go:164] Run: docker container inspect no-preload-842269 --format={{.State.Status}}
	I1209 05:45:26.834299 1429857 addons.go:70] Setting default-storageclass=true in profile "no-preload-842269"
	I1209 05:45:26.834336 1429857 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-842269"
	I1209 05:45:26.834606 1429857 cli_runner.go:164] Run: docker container inspect no-preload-842269 --format={{.State.Status}}
	I1209 05:45:26.836968 1429857 addons.go:70] Setting dashboard=true in profile "no-preload-842269"
	I1209 05:45:26.837045 1429857 addons.go:239] Setting addon dashboard=true in "no-preload-842269"
	W1209 05:45:26.837069 1429857 addons.go:248] addon dashboard should already be in state true
	I1209 05:45:26.837176 1429857 host.go:66] Checking if "no-preload-842269" exists ...
	I1209 05:45:26.838703 1429857 cli_runner.go:164] Run: docker container inspect no-preload-842269 --format={{.State.Status}}
	I1209 05:45:26.840073 1429857 out.go:179] * Verifying Kubernetes components...
	I1209 05:45:26.843169 1429857 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 05:45:26.862933 1429857 addons.go:239] Setting addon default-storageclass=true in "no-preload-842269"
	I1209 05:45:26.862982 1429857 host.go:66] Checking if "no-preload-842269" exists ...
	I1209 05:45:26.863397 1429857 cli_runner.go:164] Run: docker container inspect no-preload-842269 --format={{.State.Status}}
	I1209 05:45:26.876649 1429857 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 05:45:26.882278 1429857 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 05:45:26.882312 1429857 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1209 05:45:26.882383 1429857 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-842269
	I1209 05:45:26.897424 1429857 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1209 05:45:26.900169 1429857 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1209 05:45:26.906221 1429857 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1209 05:45:26.906259 1429857 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1209 05:45:26.906343 1429857 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-842269
	I1209 05:45:26.924326 1429857 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1209 05:45:26.924348 1429857 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1209 05:45:26.924420 1429857 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-842269
	I1209 05:45:26.963193 1429857 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34210 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/no-preload-842269/id_rsa Username:docker}
	I1209 05:45:26.976834 1429857 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34210 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/no-preload-842269/id_rsa Username:docker}
	I1209 05:45:26.980349 1429857 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34210 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/no-preload-842269/id_rsa Username:docker}
	I1209 05:45:27.069732 1429857 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 05:45:27.125899 1429857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 05:45:27.154169 1429857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1209 05:45:27.157146 1429857 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1209 05:45:27.157166 1429857 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1209 05:45:27.225908 1429857 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1209 05:45:27.225931 1429857 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1209 05:45:27.240153 1429857 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1209 05:45:27.240176 1429857 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1209 05:45:27.253621 1429857 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1209 05:45:27.253645 1429857 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1209 05:45:27.266747 1429857 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1209 05:45:27.266820 1429857 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1209 05:45:27.280090 1429857 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1209 05:45:27.280113 1429857 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1209 05:45:27.292756 1429857 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1209 05:45:27.292820 1429857 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1209 05:45:27.305913 1429857 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1209 05:45:27.305935 1429857 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1209 05:45:27.318675 1429857 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1209 05:45:27.318701 1429857 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1209 05:45:27.331338 1429857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1209 05:45:27.683021 1429857 node_ready.go:35] waiting up to 6m0s for node "no-preload-842269" to be "Ready" ...
	W1209 05:45:27.683361 1429857 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:45:27.683395 1429857 retry.go:31] will retry after 184.58375ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 05:45:27.683444 1429857 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:45:27.683451 1429857 retry.go:31] will retry after 269.389918ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 05:45:27.683630 1429857 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:45:27.683645 1429857 retry.go:31] will retry after 361.009314ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
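
Note: every apply fails with "connection refused" because the apiserver on localhost:8443 is still coming up after the restart, so the addon installer keeps retrying with short randomized backoff (the 184-809ms waits logged by retry.go above). A hedged Go sketch of that retry shape (applyWithRetry is illustrative, not minikube's retry package):

    package main

    import (
    	"fmt"
    	"math/rand"
    	"os/exec"
    	"time"
    )

    // applyWithRetry reruns `kubectl apply --force -f manifest` with
    // jittered, roughly doubling backoff until it succeeds or the
    // overall deadline is spent.
    func applyWithRetry(manifest string, deadline time.Duration) error {
    	base := 150 * time.Millisecond
    	start := time.Now()
    	for time.Since(start) < deadline {
    		if err := exec.Command("kubectl", "apply", "--force", "-f", manifest).Run(); err == nil {
    			return nil
    		} else {
    			wait := base + time.Duration(rand.Int63n(int64(base)))
    			fmt.Printf("will retry after %v: %v\n", wait, err)
    			time.Sleep(wait)
    			base *= 2
    		}
    	}
    	return fmt.Errorf("apply %s: deadline exceeded after %v", manifest, deadline)
    }

    func main() {
    	fmt.Println(applyWithRetry("/etc/kubernetes/addons/storage-provisioner.yaml", 2*time.Minute))
    }
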
	I1209 05:45:27.869176 1429857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1209 05:45:27.925658 1429857 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:45:27.925689 1429857 retry.go:31] will retry after 219.894467ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:45:27.953869 1429857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1209 05:45:28.020255 1429857 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:45:28.020343 1429857 retry.go:31] will retry after 279.215289ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:45:28.045549 1429857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1209 05:45:28.108956 1429857 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:45:28.108989 1429857 retry.go:31] will retry after 273.063822ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:45:28.146313 1429857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1209 05:45:28.216595 1429857 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:45:28.216628 1429857 retry.go:31] will retry after 381.056559ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:45:28.300048 1429857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1209 05:45:28.357345 1429857 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:45:28.357379 1429857 retry.go:31] will retry after 809.396818ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:45:28.382541 1429857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1209 05:45:28.448575 1429857 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:45:28.448609 1429857 retry.go:31] will retry after 547.183213ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:45:28.597889 1429857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1209 05:45:28.654047 1429857 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:45:28.654084 1429857 retry.go:31] will retry after 1.262178547s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
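	
	For reference, the apply that minikube runs over SSH is an ordinary kubectl invocation with KUBECONFIG set inline. A sketch reproducing the storage-provisioner command outside the ssh_runner (paths copied verbatim from the log; the exec wrapper is an illustration and assumes it runs on the node where those paths exist):
	
	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	func main() {
		// sudo accepts VAR=value assignments before the command, which is how
		// the log lines above set KUBECONFIG for the bundled kubectl binary.
		cmd := exec.Command("sudo",
			"KUBECONFIG=/var/lib/minikube/kubeconfig",
			"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
			"apply", "--force", "-f",
			"/etc/kubernetes/addons/storage-provisioner.yaml")
		out, err := cmd.CombinedOutput()
		fmt.Printf("%s", out)
		if err != nil {
			// While the apiserver is down this exits with status 1,
			// matching the "Process exited with status 1" entries above.
			fmt.Println("apply failed:", err)
		}
	}
	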
	I1209 05:45:28.996073 1429857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1209 05:45:29.058678 1429857 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:45:29.058721 1429857 retry.go:31] will retry after 492.162637ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:45:29.167844 1429857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1209 05:45:29.255905 1429857 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:45:29.255979 1429857 retry.go:31] will retry after 677.449885ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:45:29.551561 1429857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1209 05:45:29.613116 1429857 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:45:29.613148 1429857 retry.go:31] will retry after 949.934015ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 05:45:29.683816 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
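	
	Interleaved with the addon retries, minikube is also polling the node's Ready condition against the apiserver's cluster address (192.168.85.2:8443) and hitting the same connection refused. An illustrative client-go sketch of such a readiness check (a hypothetical helper, not minikube's node_ready.go; the node name and kubeconfig path are taken from the log):
	
	package main
	
	import (
		"context"
		"fmt"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	// nodeReady reports whether the named node has condition Ready=True.
	func nodeReady(cs *kubernetes.Clientset, name string) (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			// While the apiserver is down this returns the same
			// "dial tcp ... connect: connection refused" error logged above.
			return false, err
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		ready, err := nodeReady(cs, "no-preload-842269")
		fmt.Println("ready:", ready, "err:", err)
	}
	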
	I1209 05:45:29.917380 1429857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 05:45:29.933951 1429857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1209 05:45:30.056298 1429857 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:45:30.056338 1429857 retry.go:31] will retry after 692.239155ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 05:45:30.056406 1429857 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:45:30.056419 1429857 retry.go:31] will retry after 1.787501236s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:45:30.563380 1429857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1209 05:45:30.625844 1429857 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:45:30.625911 1429857 retry.go:31] will retry after 1.269031662s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:45:30.749550 1429857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1209 05:45:30.807105 1429857 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:45:30.807136 1429857 retry.go:31] will retry after 2.270752641s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 05:45:31.684175 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	I1209 05:45:31.844648 1429857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 05:45:31.895165 1429857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1209 05:45:31.914099 1429857 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:45:31.914132 1429857 retry.go:31] will retry after 1.693137355s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 05:45:31.975352 1429857 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:45:31.975398 1429857 retry.go:31] will retry after 3.456836552s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:45:33.078099 1429857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1209 05:45:33.136837 1429857 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:45:33.136870 1429857 retry.go:31] will retry after 2.044494816s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:45:33.607514 1429857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1209 05:45:33.665951 1429857 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:45:33.665984 1429857 retry.go:31] will retry after 3.185980177s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 05:45:33.684482 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	I1209 05:45:35.181952 1429857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1209 05:45:35.257647 1429857 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:45:35.257683 1429857 retry.go:31] will retry after 6.247119086s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:45:35.432935 1429857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1209 05:45:35.489710 1429857 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:45:35.489747 1429857 retry.go:31] will retry after 5.005761894s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 05:45:36.183633 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	I1209 05:45:36.853121 1429857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1209 05:45:36.910093 1429857 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:45:36.910124 1429857 retry.go:31] will retry after 2.260143685s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 05:45:38.684059 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	I1209 05:45:39.170535 1429857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1209 05:45:39.232801 1429857 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:45:39.232833 1429857 retry.go:31] will retry after 5.898281664s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:45:40.496123 1429857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1209 05:45:40.569501 1429857 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:45:40.569541 1429857 retry.go:31] will retry after 5.242247905s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 05:45:41.183647 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	I1209 05:45:41.505063 1429857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1209 05:45:41.569422 1429857 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:45:41.569455 1429857 retry.go:31] will retry after 4.503235869s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 05:45:43.683557 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	I1209 05:45:45.132421 1429857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1209 05:45:45.222154 1429857 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:45:45.222198 1429857 retry.go:31] will retry after 8.250619683s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 05:45:45.684059 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	I1209 05:45:45.812550 1429857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1209 05:45:45.881229 1429857 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:45:45.881261 1429857 retry.go:31] will retry after 8.251153137s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:45:46.073504 1429857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1209 05:45:46.133618 1429857 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:45:46.133649 1429857 retry.go:31] will retry after 8.692623616s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
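The interleaving of storageclass, storage-provisioner, and dashboard applies shows the addons being pushed concurrently, each on its own retry schedule. A rough sketch of that fan-out, under the assumption of plain goroutines rather than minikube's actual addons.go:

package main

import (
	"fmt"
	"os/exec"
	"sync"
)

func main() {
	manifests := [][]string{
		{"/etc/kubernetes/addons/storageclass.yaml"},
		{"/etc/kubernetes/addons/storage-provisioner.yaml"},
		{
			"/etc/kubernetes/addons/dashboard-ns.yaml",
			"/etc/kubernetes/addons/dashboard-svc.yaml",
			// ... the remaining dashboard manifests from the log
		},
	}

	var wg sync.WaitGroup
	for _, group := range manifests {
		wg.Add(1)
		go func(files []string) {
			defer wg.Done()
			args := []string{"apply", "--force"}
			for _, f := range files {
				args = append(args, "-f", f)
			}
			if out, err := exec.Command("kubectl", args...).CombinedOutput(); err != nil {
				fmt.Printf("apply failed, would retry: %v\n%s", err, out)
			}
		}(group)
	}
	wg.Wait()
}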
	W1209 05:45:47.684156 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:45:49.684420 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:45:52.184266 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	I1209 05:45:53.473746 1429857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1209 05:45:53.531667 1429857 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:45:53.531698 1429857 retry.go:31] will retry after 15.506930845s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:45:54.132979 1429857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1209 05:45:54.193191 1429857 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:45:54.193224 1429857 retry.go:31] will retry after 10.284746977s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 05:45:54.683975 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	I1209 05:45:54.827471 1429857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1209 05:45:54.889500 1429857 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:45:54.889532 1429857 retry.go:31] will retry after 18.446693624s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 05:45:56.684069 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:45:58.684566 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:46:01.183568 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:46:03.184493 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
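In parallel with the addon retries, node_ready.go keeps polling the node's Ready condition against https://192.168.85.2:8443 and gets the same connection refusal. A minimal client-go sketch of such a readiness check, assuming the kubeconfig path from the log; the surrounding polling loop is omitted:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func nodeReady(clientset *kubernetes.Clientset, name string) (bool, error) {
	node, err := clientset.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err // e.g. "connect: connection refused" while the apiserver is down
	}
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	ready, err := nodeReady(clientset, "no-preload-842269")
	fmt.Println(ready, err)
}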
	I1209 05:46:04.478972 1429857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1209 05:46:04.538725 1429857 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:46:04.538770 1429857 retry.go:31] will retry after 23.738719196s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 05:46:05.684477 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:46:08.183831 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	I1209 05:46:09.038866 1429857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1209 05:46:09.093686 1429857 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:46:09.093718 1429857 retry.go:31] will retry after 26.248517502s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
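The escape hatch suggested in every stderr line, --validate=false, would skip the schema download entirely, but it would not help here: the subsequent apply still has to reach the same unreachable apiserver. The failing request can be reproduced with a bare HTTP probe of the endpoint kubectl validates against:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 32 * time.Second, // matches kubectl's ?timeout=32s query
		Transport: &http.Transport{
			// The apiserver serves a self-signed cert; skip verification
			// for this illustrative probe only.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://localhost:8443/openapi/v2?timeout=32s")
	if err != nil {
		// While the apiserver is down this prints the same
		// "dial tcp [::1]:8443: connect: connection refused" as the log.
		fmt.Println("probe failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("openapi endpoint status:", resp.Status)
}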
	W1209 05:46:10.184557 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:46:12.684037 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	I1209 05:46:13.337395 1429857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1209 05:46:13.395453 1429857 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:46:13.395482 1429857 retry.go:31] will retry after 20.604537862s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
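Each failure surfaces as "Process exited with status 1"; in Go that exit code comes back as an *exec.ExitError, which is how a caller can distinguish a validation failure from, say, a missing kubectl binary. A small sketch:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	err := exec.Command("kubectl", "apply", "-f", "/etc/kubernetes/addons/storageclass.yaml").Run()
	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("applied cleanly")
	case errors.As(err, &exitErr):
		// e.g. "Process exited with status 1" in the log above
		fmt.Println("kubectl exited with status", exitErr.ExitCode())
	default:
		fmt.Println("could not run kubectl at all:", err) // e.g. binary not found
	}
}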
	W1209 05:46:14.684146 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:46:17.183632 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:46:19.184010 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:46:21.184436 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:46:23.684454 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:46:26.184391 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	I1209 05:46:28.277860 1429857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1209 05:46:28.338442 1429857 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:46:28.338477 1429857 retry.go:31] will retry after 19.859111094s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 05:46:28.684457 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:46:31.184269 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:46:33.683555 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	I1209 05:46:34.001016 1429857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1209 05:46:34.063987 1429857 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:46:34.064033 1429857 retry.go:31] will retry after 28.707309643s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:46:35.342890 1429857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1209 05:46:35.399451 1429857 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:46:35.399484 1429857 retry.go:31] will retry after 27.272034746s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 05:46:35.684281 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:46:38.184278 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:46:40.684368 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:46:42.684576 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:46:45.183976 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:46:47.683573 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	I1209 05:46:48.197871 1429857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1209 05:46:48.261674 1429857 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 05:46:48.261785 1429857 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
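Every apply failure above shares one root cause: kubectl cannot reach the apiserver on localhost:8443, so it cannot download the OpenAPI schema it validates against. The --validate=false escape hatch named in the error text only skips that schema download; the apply itself would still need a live apiserver. A minimal sketch of the suggested invocation, reusing paths from the log (illustrative only, not a fix for the underlying outage):

    # Skip client-side schema validation, per the error text above. The
    # apiserver must still be reachable for the apply itself to succeed.
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force \
      --validate=false -f /etc/kubernetes/addons/dashboard-ns.yaml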
	W1209 05:46:49.683828 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:46:51.684493 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:46:54.184206 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:46:56.683584 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:46:58.684546 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:47:01.184173 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	I1209 05:47:02.671757 1429857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1209 05:47:02.753978 1429857 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 05:47:02.754067 1429857 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1209 05:47:02.772232 1429857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1209 05:47:02.829567 1429857 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 05:47:02.829669 1429857 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1209 05:47:02.832581 1429857 out.go:179] * Enabled addons: 
	I1209 05:47:02.835625 1429857 addons.go:530] duration metric: took 1m36.002308157s for enable addons: enabled=[]
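The empty enabled=[] above confirms that after 1m36s of retries no addon was actually applied. A quick way to inspect addon state for the profile (the profile name is an assumption taken from the node name in the surrounding retries):

    # List addon status for the profile; name assumed from the log above.
    out/minikube-linux-arm64 addons list -p no-preload-842269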
	W1209 05:47:03.184442 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:47:05.684309 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:47:07.684428 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:47:09.684532 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:47:12.184139 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:47:14.184410 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:47:16.684288 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:47:19.183449 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:47:21.184166 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:47:23.683505 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:47:25.684390 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:47:28.184325 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:47:30.184488 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:47:32.683874 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:47:34.684542 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:47:37.184070 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:47:39.684409 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:47:42.184768 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:47:44.684202 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:47:46.684646 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:47:49.184114 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:47:51.184388 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:47:53.683596 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:47:55.684345 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:47:58.184278 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:48:00.684415 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:48:03.184342 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:48:05.683500 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:48:07.684429 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:48:10.184405 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:48:12.684229 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:48:15.184267 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:48:17.684049 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:48:19.684520 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:48:22.183499 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:48:24.184521 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:48:26.684110 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:48:28.684278 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:48:31.184230 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:48:33.184335 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:48:35.184598 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:48:37.683518 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:48:40.183481 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:48:42.183843 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:48:44.184261 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:48:46.684428 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:48:49.184195 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:48:51.184658 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:48:53.683520 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:48:55.684601 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:48:58.184091 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
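Each node_ready probe above is an HTTP GET against the node object, refused because nothing is listening on 192.168.85.2:8443. The equivalent manual check, assuming an apiserver that is actually up:

    # Read the node's Ready condition directly (every probe in this run
    # failed with 'connection refused' instead).
    kubectl --kubeconfig=/var/lib/minikube/kubeconfig get node no-preload-842269 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'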
	I1209 05:49:02.727137 1422398 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001512454s
	I1209 05:49:02.727164 1422398 kubeadm.go:319] 
	I1209 05:49:02.727221 1422398 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1209 05:49:02.727255 1422398 kubeadm.go:319] 	- The kubelet is not running
	I1209 05:49:02.727360 1422398 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1209 05:49:02.727366 1422398 kubeadm.go:319] 
	I1209 05:49:02.727470 1422398 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1209 05:49:02.727502 1422398 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1209 05:49:02.727533 1422398 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1209 05:49:02.727537 1422398 kubeadm.go:319] 
	I1209 05:49:02.737230 1422398 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1209 05:49:02.737653 1422398 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1209 05:49:02.737766 1422398 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1209 05:49:02.738004 1422398 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1209 05:49:02.738014 1422398 kubeadm.go:319] 
	I1209 05:49:02.738083 1422398 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
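kubeadm gave up after polling the kubelet's healthz endpoint for 4m0s, meaning the control-plane static pods were never started. The log's own troubleshooting suggestions, collected into one sequence to run inside the node (e.g. via 'minikube ssh'); on a cgroups-v1 host like this one, the SystemVerification warning above is also worth following up:

    # Inside the minikube node:
    systemctl status kubelet                    # is the unit active at all?
    journalctl -xeu kubelet -n 100              # recent kubelet errors
    curl -sSL http://127.0.0.1:10248/healthz    # the probe kubeadm was polling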
	I1209 05:49:02.738133 1422398 kubeadm.go:403] duration metric: took 8m7.90723854s to StartCluster
	I1209 05:49:02.738172 1422398 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:49:02.738235 1422398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:49:02.766379 1422398 cri.go:89] found id: ""
	I1209 05:49:02.766408 1422398 logs.go:282] 0 containers: []
	W1209 05:49:02.766416 1422398 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:49:02.766423 1422398 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:49:02.766487 1422398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:49:02.790398 1422398 cri.go:89] found id: ""
	I1209 05:49:02.790423 1422398 logs.go:282] 0 containers: []
	W1209 05:49:02.790431 1422398 logs.go:284] No container was found matching "etcd"
	I1209 05:49:02.790437 1422398 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:49:02.790493 1422398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:49:02.814861 1422398 cri.go:89] found id: ""
	I1209 05:49:02.814885 1422398 logs.go:282] 0 containers: []
	W1209 05:49:02.814894 1422398 logs.go:284] No container was found matching "coredns"
	I1209 05:49:02.814900 1422398 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:49:02.814958 1422398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:49:02.838939 1422398 cri.go:89] found id: ""
	I1209 05:49:02.838964 1422398 logs.go:282] 0 containers: []
	W1209 05:49:02.838973 1422398 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:49:02.838979 1422398 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:49:02.839047 1422398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:49:02.863333 1422398 cri.go:89] found id: ""
	I1209 05:49:02.863398 1422398 logs.go:282] 0 containers: []
	W1209 05:49:02.863421 1422398 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:49:02.863440 1422398 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:49:02.863527 1422398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:49:02.887104 1422398 cri.go:89] found id: ""
	I1209 05:49:02.887134 1422398 logs.go:282] 0 containers: []
	W1209 05:49:02.887152 1422398 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:49:02.887159 1422398 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:49:02.887226 1422398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:49:02.911004 1422398 cri.go:89] found id: ""
	I1209 05:49:02.911031 1422398 logs.go:282] 0 containers: []
	W1209 05:49:02.911039 1422398 logs.go:284] No container was found matching "kindnet"
	I1209 05:49:02.911049 1422398 logs.go:123] Gathering logs for kubelet ...
	I1209 05:49:02.911061 1422398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:49:02.967158 1422398 logs.go:123] Gathering logs for dmesg ...
	I1209 05:49:02.967192 1422398 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:49:02.983481 1422398 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:49:02.983507 1422398 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:49:03.049617 1422398 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:49:03.041414    4809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:49:03.042038    4809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:49:03.043805    4809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:49:03.044324    4809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:49:03.045788    4809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:49:03.041414    4809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:49:03.042038    4809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:49:03.043805    4809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:49:03.044324    4809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:49:03.045788    4809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:49:03.049650 1422398 logs.go:123] Gathering logs for containerd ...
	I1209 05:49:03.049664 1422398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:49:03.089335 1422398 logs.go:123] Gathering logs for container status ...
	I1209 05:49:03.089371 1422398 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1209 05:49:03.115730 1422398 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001512454s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1209 05:49:03.115806 1422398 out.go:285] * 
	W1209 05:49:03.116006 1422398 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001512454s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1209 05:49:03.116054 1422398 out.go:285] * 
	W1209 05:49:03.118239 1422398 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 05:49:03.123846 1422398 out.go:203] 
	W1209 05:49:03.126762 1422398 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001512454s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1209 05:49:03.126805 1422398 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1209 05:49:03.126825 1422398 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1209 05:49:03.129920 1422398 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 09 05:40:53 newest-cni-262540 containerd[755]: time="2025-12-09T05:40:53.083724360Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 09 05:40:53 newest-cni-262540 containerd[755]: time="2025-12-09T05:40:53.083744421Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 09 05:40:53 newest-cni-262540 containerd[755]: time="2025-12-09T05:40:53.083783443Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 09 05:40:53 newest-cni-262540 containerd[755]: time="2025-12-09T05:40:53.083797490Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 09 05:40:53 newest-cni-262540 containerd[755]: time="2025-12-09T05:40:53.083807000Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 09 05:40:53 newest-cni-262540 containerd[755]: time="2025-12-09T05:40:53.083818889Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 09 05:40:53 newest-cni-262540 containerd[755]: time="2025-12-09T05:40:53.083828136Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 09 05:40:53 newest-cni-262540 containerd[755]: time="2025-12-09T05:40:53.083838802Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 09 05:40:53 newest-cni-262540 containerd[755]: time="2025-12-09T05:40:53.083861349Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 09 05:40:53 newest-cni-262540 containerd[755]: time="2025-12-09T05:40:53.083894169Z" level=info msg="Connect containerd service"
	Dec 09 05:40:53 newest-cni-262540 containerd[755]: time="2025-12-09T05:40:53.084216441Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 09 05:40:53 newest-cni-262540 containerd[755]: time="2025-12-09T05:40:53.084737846Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 09 05:40:53 newest-cni-262540 containerd[755]: time="2025-12-09T05:40:53.102982113Z" level=info msg="Start subscribing containerd event"
	Dec 09 05:40:53 newest-cni-262540 containerd[755]: time="2025-12-09T05:40:53.103063719Z" level=info msg="Start recovering state"
	Dec 09 05:40:53 newest-cni-262540 containerd[755]: time="2025-12-09T05:40:53.104591037Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 09 05:40:53 newest-cni-262540 containerd[755]: time="2025-12-09T05:40:53.104654519Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 09 05:40:53 newest-cni-262540 containerd[755]: time="2025-12-09T05:40:53.142169815Z" level=info msg="Start event monitor"
	Dec 09 05:40:53 newest-cni-262540 containerd[755]: time="2025-12-09T05:40:53.142215894Z" level=info msg="Start cni network conf syncer for default"
	Dec 09 05:40:53 newest-cni-262540 containerd[755]: time="2025-12-09T05:40:53.142225797Z" level=info msg="Start streaming server"
	Dec 09 05:40:53 newest-cni-262540 containerd[755]: time="2025-12-09T05:40:53.142235594Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 09 05:40:53 newest-cni-262540 containerd[755]: time="2025-12-09T05:40:53.142243799Z" level=info msg="runtime interface starting up..."
	Dec 09 05:40:53 newest-cni-262540 containerd[755]: time="2025-12-09T05:40:53.142250239Z" level=info msg="starting plugins..."
	Dec 09 05:40:53 newest-cni-262540 containerd[755]: time="2025-12-09T05:40:53.142262826Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 09 05:40:53 newest-cni-262540 containerd[755]: time="2025-12-09T05:40:53.142386818Z" level=info msg="containerd successfully booted in 0.079198s"
	Dec 09 05:40:53 newest-cni-262540 systemd[1]: Started containerd.service - containerd container runtime.
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:49:04.182459    4927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:49:04.183232    4927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:49:04.184855    4927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:49:04.185178    4927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:49:04.186710    4927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +25.904254] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:14] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:16] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:18] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:19] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:30] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:32] overlayfs: idmapped layers are currently not supported
	[ +28.114653] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:33] overlayfs: idmapped layers are currently not supported
	[ +23.720849] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:34] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:35] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:36] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:37] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:38] overlayfs: idmapped layers are currently not supported
	[ +23.656275] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:39] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:57] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:58] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:00] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:02] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:03] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:05] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:06] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 9 05:31] overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	
	
	==> kernel <==
	 05:49:04 up  8:31,  0 user,  load average: 0.19, 0.73, 1.41
	Linux newest-cni-262540 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 09 05:49:01 newest-cni-262540 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 09 05:49:01 newest-cni-262540 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
	Dec 09 05:49:01 newest-cni-262540 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 05:49:01 newest-cni-262540 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 05:49:01 newest-cni-262540 kubelet[4733]: E1209 05:49:01.979469    4733 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 09 05:49:01 newest-cni-262540 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 09 05:49:01 newest-cni-262540 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 09 05:49:02 newest-cni-262540 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 09 05:49:02 newest-cni-262540 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 05:49:02 newest-cni-262540 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 05:49:02 newest-cni-262540 kubelet[4738]: E1209 05:49:02.726100    4738 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 09 05:49:02 newest-cni-262540 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 09 05:49:02 newest-cni-262540 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 09 05:49:03 newest-cni-262540 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 09 05:49:03 newest-cni-262540 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 05:49:03 newest-cni-262540 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 05:49:03 newest-cni-262540 kubelet[4838]: E1209 05:49:03.497747    4838 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 09 05:49:03 newest-cni-262540 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 09 05:49:03 newest-cni-262540 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 09 05:49:04 newest-cni-262540 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Dec 09 05:49:04 newest-cni-262540 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 05:49:04 newest-cni-262540 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 05:49:04 newest-cni-262540 kubelet[4931]: E1209 05:49:04.234817    4931 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 09 05:49:04 newest-cni-262540 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 09 05:49:04 newest-cni-262540 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
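The dump above records the root cause twice: kubeadm's SystemVerification warning says cgroup v1 support now has to be opted into via the kubelet option 'FailCgroupV1', and the kubelet journal shows the service crash-looping on exactly that check ("kubelet is configured to not run on a host using cgroup v1"). A minimal diagnostic sketch, assuming the node is still reachable; the profile name comes from the logs, and the YAML fragment is an illustrative opt-in based on the field named in the warning, not the harness's own remediation:

	# Check which cgroup hierarchy the node sees: "cgroup2fs" means v2, "tmpfs" means v1.
	minikube ssh -p newest-cni-262540 "stat -fc %T /sys/fs/cgroup"

	# Illustrative KubeletConfiguration fragment opting back into cgroup v1
	# (failCgroupV1 is the option cited by the SystemVerification warning).
	cat <<'EOF' > kubelet-cgroupv1.yaml
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	failCgroupV1: false
	EOF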
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-262540 -n newest-cni-262540
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-262540 -n newest-cni-262540: exit status 6 (345.360043ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1209 05:49:04.692655 1434879 status.go:458] kubeconfig endpoint: get endpoint: "newest-cni-262540" does not appear in /home/jenkins/minikube-integration/22081-1142328/kubeconfig

** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "newest-cni-262540" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (503.74s)
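The exit-6 status here follows directly from the failed start: the profile entry was never written to the kubeconfig, which is also what the "stale minikube-vm" warning refers to. A usage sketch of the suggested fix, with the profile name taken from the output above (update-context and -p are standard minikube flags):

	# Re-point kubectl at the profile's current endpoint, then verify the active context.
	minikube update-context -p newest-cni-262540
	kubectl config current-context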

x
+
TestStartStop/group/no-preload/serial/DeployApp (3.13s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-842269 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) Non-zero exit: kubectl --context no-preload-842269 create -f testdata/busybox.yaml: exit status 1 (53.477062ms)

** stderr ** 
	error: context "no-preload-842269" does not exist

** /stderr **
start_stop_delete_test.go:194: kubectl --context no-preload-842269 create -f testdata/busybox.yaml failed: exit status 1
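The create command fails locally, before any API request is made, because the named kubectl context was never created (the cluster's own start had already failed). A quick sketch for confirming which contexts actually exist, using standard kubectl subcommands:

	# List every context in the active kubeconfig; no-preload-842269 will be absent.
	kubectl config get-contexts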
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-842269
helpers_test.go:243: (dbg) docker inspect no-preload-842269:

-- stdout --
	[
	    {
	        "Id": "9789b34a5453b154c4ceca4f0038c1d7948d7f0f72f334a114a8452d803ad415",
	        "Created": "2025-12-09T05:35:10.617601088Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1404960,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-09T05:35:10.694361506Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e4eb91ed18a24161fce60c7cdd660144ecd5b8c5029dc2dea2c5e423c2f48ce4",
	        "ResolvConfPath": "/var/lib/docker/containers/9789b34a5453b154c4ceca4f0038c1d7948d7f0f72f334a114a8452d803ad415/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9789b34a5453b154c4ceca4f0038c1d7948d7f0f72f334a114a8452d803ad415/hostname",
	        "HostsPath": "/var/lib/docker/containers/9789b34a5453b154c4ceca4f0038c1d7948d7f0f72f334a114a8452d803ad415/hosts",
	        "LogPath": "/var/lib/docker/containers/9789b34a5453b154c4ceca4f0038c1d7948d7f0f72f334a114a8452d803ad415/9789b34a5453b154c4ceca4f0038c1d7948d7f0f72f334a114a8452d803ad415-json.log",
	        "Name": "/no-preload-842269",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-842269:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-842269",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9789b34a5453b154c4ceca4f0038c1d7948d7f0f72f334a114a8452d803ad415",
	                "LowerDir": "/var/lib/docker/overlay2/658f95b1f330fcb0d4774e9611c50218d409d0a889583e0050325b4fe479e9f7-init/diff:/var/lib/docker/overlay2/c44bb57aa59cc265266f37f2bb6e7ec0e7d641c3b4aeaa57e6d23deec6f0d1d4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/658f95b1f330fcb0d4774e9611c50218d409d0a889583e0050325b4fe479e9f7/merged",
	                "UpperDir": "/var/lib/docker/overlay2/658f95b1f330fcb0d4774e9611c50218d409d0a889583e0050325b4fe479e9f7/diff",
	                "WorkDir": "/var/lib/docker/overlay2/658f95b1f330fcb0d4774e9611c50218d409d0a889583e0050325b4fe479e9f7/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-842269",
	                "Source": "/var/lib/docker/volumes/no-preload-842269/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-842269",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-842269",
	                "name.minikube.sigs.k8s.io": "no-preload-842269",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c8d638bf0ac3f8de516cba00d80a3b149af62367900ced69943b89e3e7924db8",
	            "SandboxKey": "/var/run/docker/netns/c8d638bf0ac3",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34185"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34186"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34189"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34187"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34188"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-842269": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "4e:5c:05:82:25:f0",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6461bd7226e5723487f325bf78054dc63f1dafa2831abe7b44a8cc288dfa4456",
	                    "EndpointID": "5bccd85f7c02ee9bc4903397b85755d423fd035b5d120846d74ca8550b415301",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-842269",
	                        "9789b34a5453"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
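The inspect output shows the API server's 8443/tcp endpoint published on a loopback host port (34188 here). Rather than scanning the JSON by eye, the same mapping can be read back with docker's Go-template formatter; a sketch using the container name from the log:

	# Print the host port bound to the container's 8443/tcp endpoint.
	docker inspect -f '{{ (index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort }}' no-preload-842269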
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-842269 -n no-preload-842269
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-842269 -n no-preload-842269: exit status 6 (472.267519ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1209 05:43:45.503969 1427169 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-842269" does not appear in /home/jenkins/minikube-integration/22081-1142328/kubeconfig

** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-842269 logs -n 25
helpers_test.go:260: TestStartStop/group/no-preload/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                            ARGS                                                                                                                            │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p embed-certs-432108 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-432108           │ jenkins │ v1.37.0 │ 09 Dec 25 05:34 UTC │ 09 Dec 25 05:36 UTC │
	│ start   │ -p cert-expiration-074045 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd                                                                                                                                            │ cert-expiration-074045       │ jenkins │ v1.37.0 │ 09 Dec 25 05:34 UTC │ 09 Dec 25 05:35 UTC │
	│ delete  │ -p cert-expiration-074045                                                                                                                                                                                                                                  │ cert-expiration-074045       │ jenkins │ v1.37.0 │ 09 Dec 25 05:35 UTC │ 09 Dec 25 05:35 UTC │
	│ delete  │ -p disable-driver-mounts-094940                                                                                                                                                                                                                            │ disable-driver-mounts-094940 │ jenkins │ v1.37.0 │ 09 Dec 25 05:35 UTC │ 09 Dec 25 05:35 UTC │
	│ start   │ -p no-preload-842269 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-842269            │ jenkins │ v1.37.0 │ 09 Dec 25 05:35 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-432108 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                   │ embed-certs-432108           │ jenkins │ v1.37.0 │ 09 Dec 25 05:36 UTC │ 09 Dec 25 05:36 UTC │
	│ stop    │ -p embed-certs-432108 --alsologtostderr -v=3                                                                                                                                                                                                               │ embed-certs-432108           │ jenkins │ v1.37.0 │ 09 Dec 25 05:36 UTC │ 09 Dec 25 05:36 UTC │
	│ addons  │ enable dashboard -p embed-certs-432108 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                              │ embed-certs-432108           │ jenkins │ v1.37.0 │ 09 Dec 25 05:36 UTC │ 09 Dec 25 05:36 UTC │
	│ start   │ -p embed-certs-432108 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-432108           │ jenkins │ v1.37.0 │ 09 Dec 25 05:36 UTC │ 09 Dec 25 05:37 UTC │
	│ image   │ embed-certs-432108 image list --format=json                                                                                                                                                                                                                │ embed-certs-432108           │ jenkins │ v1.37.0 │ 09 Dec 25 05:37 UTC │ 09 Dec 25 05:37 UTC │
	│ pause   │ -p embed-certs-432108 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-432108           │ jenkins │ v1.37.0 │ 09 Dec 25 05:37 UTC │ 09 Dec 25 05:37 UTC │
	│ unpause │ -p embed-certs-432108 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-432108           │ jenkins │ v1.37.0 │ 09 Dec 25 05:37 UTC │ 09 Dec 25 05:37 UTC │
	│ delete  │ -p embed-certs-432108                                                                                                                                                                                                                                      │ embed-certs-432108           │ jenkins │ v1.37.0 │ 09 Dec 25 05:37 UTC │ 09 Dec 25 05:37 UTC │
	│ delete  │ -p embed-certs-432108                                                                                                                                                                                                                                      │ embed-certs-432108           │ jenkins │ v1.37.0 │ 09 Dec 25 05:37 UTC │ 09 Dec 25 05:37 UTC │
	│ start   │ -p default-k8s-diff-port-564611 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-564611 │ jenkins │ v1.37.0 │ 09 Dec 25 05:37 UTC │ 09 Dec 25 05:39 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-564611 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ default-k8s-diff-port-564611 │ jenkins │ v1.37.0 │ 09 Dec 25 05:39 UTC │ 09 Dec 25 05:39 UTC │
	│ stop    │ -p default-k8s-diff-port-564611 --alsologtostderr -v=3                                                                                                                                                                                                     │ default-k8s-diff-port-564611 │ jenkins │ v1.37.0 │ 09 Dec 25 05:39 UTC │ 09 Dec 25 05:39 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-564611 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ default-k8s-diff-port-564611 │ jenkins │ v1.37.0 │ 09 Dec 25 05:39 UTC │ 09 Dec 25 05:39 UTC │
	│ start   │ -p default-k8s-diff-port-564611 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-564611 │ jenkins │ v1.37.0 │ 09 Dec 25 05:39 UTC │ 09 Dec 25 05:40 UTC │
	│ image   │ default-k8s-diff-port-564611 image list --format=json                                                                                                                                                                                                      │ default-k8s-diff-port-564611 │ jenkins │ v1.37.0 │ 09 Dec 25 05:40 UTC │ 09 Dec 25 05:40 UTC │
	│ pause   │ -p default-k8s-diff-port-564611 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-564611 │ jenkins │ v1.37.0 │ 09 Dec 25 05:40 UTC │ 09 Dec 25 05:40 UTC │
	│ unpause │ -p default-k8s-diff-port-564611 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-564611 │ jenkins │ v1.37.0 │ 09 Dec 25 05:40 UTC │ 09 Dec 25 05:40 UTC │
	│ delete  │ -p default-k8s-diff-port-564611                                                                                                                                                                                                                            │ default-k8s-diff-port-564611 │ jenkins │ v1.37.0 │ 09 Dec 25 05:40 UTC │ 09 Dec 25 05:40 UTC │
	│ delete  │ -p default-k8s-diff-port-564611                                                                                                                                                                                                                            │ default-k8s-diff-port-564611 │ jenkins │ v1.37.0 │ 09 Dec 25 05:40 UTC │ 09 Dec 25 05:40 UTC │
	│ start   │ -p newest-cni-262540 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-262540            │ jenkins │ v1.37.0 │ 09 Dec 25 05:40 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/09 05:40:41
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 05:40:41.014166 1422398 out.go:360] Setting OutFile to fd 1 ...
	I1209 05:40:41.014346 1422398 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 05:40:41.014376 1422398 out.go:374] Setting ErrFile to fd 2...
	I1209 05:40:41.014403 1422398 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 05:40:41.014777 1422398 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-1142328/.minikube/bin
	I1209 05:40:41.015346 1422398 out.go:368] Setting JSON to false
	I1209 05:40:41.016651 1422398 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":30164,"bootTime":1765228677,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1209 05:40:41.016752 1422398 start.go:143] virtualization:  
	I1209 05:40:41.020737 1422398 out.go:179] * [newest-cni-262540] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1209 05:40:41.025100 1422398 out.go:179]   - MINIKUBE_LOCATION=22081
	I1209 05:40:41.025177 1422398 notify.go:221] Checking for updates...
	I1209 05:40:41.031377 1422398 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 05:40:41.034527 1422398 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22081-1142328/kubeconfig
	I1209 05:40:41.037660 1422398 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-1142328/.minikube
	I1209 05:40:41.040646 1422398 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1209 05:40:41.043555 1422398 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 05:40:41.047098 1422398 config.go:182] Loaded profile config "no-preload-842269": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1209 05:40:41.047203 1422398 driver.go:422] Setting default libvirt URI to qemu:///system
	I1209 05:40:41.082759 1422398 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1209 05:40:41.082877 1422398 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 05:40:41.141221 1422398 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-09 05:40:41.131267754 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1209 05:40:41.141331 1422398 docker.go:319] overlay module found
	I1209 05:40:41.144673 1422398 out.go:179] * Using the docker driver based on user configuration
	I1209 05:40:41.147595 1422398 start.go:309] selected driver: docker
	I1209 05:40:41.147618 1422398 start.go:927] validating driver "docker" against <nil>
	I1209 05:40:41.147633 1422398 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 05:40:41.148480 1422398 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 05:40:41.205051 1422398 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-09 05:40:41.195808894 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1209 05:40:41.205216 1422398 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1209 05:40:41.205249 1422398 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
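	The W-level warning above is itself a pointer to the fix: minikube's --cni flag supplies a CNI in one step instead of requiring the user to install one separately. A hedged sketch of that alternative invocation, reusing the profile and runtime flags from this log (kindnet matches what the CNI manager recommends a few lines below):
	
	  minikube start -p newest-cni-262540 --cni=kindnet --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0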
	I1209 05:40:41.205488 1422398 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1209 05:40:41.208233 1422398 out.go:179] * Using Docker driver with root privileges
	I1209 05:40:41.211172 1422398 cni.go:84] Creating CNI manager for ""
	I1209 05:40:41.211250 1422398 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1209 05:40:41.211263 1422398 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1209 05:40:41.211347 1422398 start.go:353] cluster config:
	{Name:newest-cni-262540 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-262540 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 05:40:41.214410 1422398 out.go:179] * Starting "newest-cni-262540" primary control-plane node in "newest-cni-262540" cluster
	I1209 05:40:41.217388 1422398 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1209 05:40:41.220416 1422398 out.go:179] * Pulling base image v0.0.48-1765184860-22066 ...
	I1209 05:40:41.223240 1422398 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1209 05:40:41.223288 1422398 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1209 05:40:41.223310 1422398 cache.go:65] Caching tarball of preloaded images
	I1209 05:40:41.223322 1422398 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local docker daemon
	I1209 05:40:41.223405 1422398 preload.go:238] Found /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1209 05:40:41.223416 1422398 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1209 05:40:41.223520 1422398 profile.go:143] Saving config to /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/config.json ...
	I1209 05:40:41.223546 1422398 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/config.json: {Name:mk3f2f0447b25b9c02ca47937d45ed297c23b284 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 05:40:41.242533 1422398 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local docker daemon, skipping pull
	I1209 05:40:41.242556 1422398 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c exists in daemon, skipping load
	I1209 05:40:41.242574 1422398 cache.go:243] Successfully downloaded all kic artifacts
	I1209 05:40:41.242607 1422398 start.go:360] acquireMachinesLock for newest-cni-262540: {Name:mk272d84ff1bc8c8949f2f0b1f608a7519899d10 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 05:40:41.242722 1422398 start.go:364] duration metric: took 94.012µs to acquireMachinesLock for "newest-cni-262540"
	I1209 05:40:41.242752 1422398 start.go:93] Provisioning new machine with config: &{Name:newest-cni-262540 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-262540 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1209 05:40:41.242832 1422398 start.go:125] createHost starting for "" (driver="docker")
	I1209 05:40:41.246278 1422398 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1209 05:40:41.246513 1422398 start.go:159] libmachine.API.Create for "newest-cni-262540" (driver="docker")
	I1209 05:40:41.246549 1422398 client.go:173] LocalClient.Create starting
	I1209 05:40:41.246618 1422398 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem
	I1209 05:40:41.246653 1422398 main.go:143] libmachine: Decoding PEM data...
	I1209 05:40:41.246672 1422398 main.go:143] libmachine: Parsing certificate...
	I1209 05:40:41.246730 1422398 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/cert.pem
	I1209 05:40:41.246753 1422398 main.go:143] libmachine: Decoding PEM data...
	I1209 05:40:41.246765 1422398 main.go:143] libmachine: Parsing certificate...
	I1209 05:40:41.247138 1422398 cli_runner.go:164] Run: docker network inspect newest-cni-262540 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1209 05:40:41.262988 1422398 cli_runner.go:211] docker network inspect newest-cni-262540 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1209 05:40:41.263073 1422398 network_create.go:284] running [docker network inspect newest-cni-262540] to gather additional debugging logs...
	I1209 05:40:41.263095 1422398 cli_runner.go:164] Run: docker network inspect newest-cni-262540
	W1209 05:40:41.279120 1422398 cli_runner.go:211] docker network inspect newest-cni-262540 returned with exit code 1
	I1209 05:40:41.279154 1422398 network_create.go:287] error running [docker network inspect newest-cni-262540]: docker network inspect newest-cni-262540: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-262540 not found
	I1209 05:40:41.279168 1422398 network_create.go:289] output of [docker network inspect newest-cni-262540]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-262540 not found
	
	** /stderr **
	I1209 05:40:41.279286 1422398 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1209 05:40:41.295748 1422398 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-7a15eec16b1a IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:8a:b7:58:bc:12:6c} reservation:<nil>}
	I1209 05:40:41.296192 1422398 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-fcb9e6b38e8e IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:56:c3:7a:b4:06:4b} reservation:<nil>}
	I1209 05:40:41.296445 1422398 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-8c1346c67d6b IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:82:10:14:75:55:fb} reservation:<nil>}
	I1209 05:40:41.296875 1422398 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019e80f0}
	I1209 05:40:41.296895 1422398 network_create.go:124] attempt to create docker network newest-cni-262540 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1209 05:40:41.296949 1422398 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-262540 newest-cni-262540
	I1209 05:40:41.356493 1422398 network_create.go:108] docker network newest-cni-262540 192.168.76.0/24 created
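
The subnet scan above can be reproduced by hand. A minimal sketch, assuming only the same Docker daemon is reachable; the network names are whatever bridges happen to exist on the host:

    # list every docker bridge network together with its IPv4 subnet
    docker network ls --filter driver=bridge -q \
      | xargs docker network inspect --format '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}} {{end}}'

In this run 192.168.49.0/24, 192.168.58.0/24 and 192.168.67.0/24 were already claimed by other profiles, so the next candidate, 192.168.76.0/24, was taken.
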
	I1209 05:40:41.356525 1422398 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-262540" container
	I1209 05:40:41.356609 1422398 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1209 05:40:41.372493 1422398 cli_runner.go:164] Run: docker volume create newest-cni-262540 --label name.minikube.sigs.k8s.io=newest-cni-262540 --label created_by.minikube.sigs.k8s.io=true
	I1209 05:40:41.390479 1422398 oci.go:103] Successfully created a docker volume newest-cni-262540
	I1209 05:40:41.390571 1422398 cli_runner.go:164] Run: docker run --rm --name newest-cni-262540-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-262540 --entrypoint /usr/bin/test -v newest-cni-262540:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c -d /var/lib
	I1209 05:40:41.957365 1422398 oci.go:107] Successfully prepared a docker volume newest-cni-262540
	I1209 05:40:41.957440 1422398 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1209 05:40:41.957454 1422398 kic.go:194] Starting extracting preloaded images to volume ...
	I1209 05:40:41.957523 1422398 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-262540:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c -I lz4 -xf /preloaded.tar -C /extractDir
	I1209 05:40:46.577478 1422398 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-262540:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c -I lz4 -xf /preloaded.tar -C /extractDir: (4.619919939s)
	I1209 05:40:46.577511 1422398 kic.go:203] duration metric: took 4.620053703s to extract preloaded images to volume ...
	W1209 05:40:46.577655 1422398 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1209 05:40:46.577765 1422398 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1209 05:40:46.641962 1422398 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-262540 --name newest-cni-262540 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-262540 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-262540 --network newest-cni-262540 --ip 192.168.76.2 --volume newest-cni-262540:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c
	I1209 05:40:46.963179 1422398 cli_runner.go:164] Run: docker container inspect newest-cni-262540 --format={{.State.Running}}
	I1209 05:40:46.990367 1422398 cli_runner.go:164] Run: docker container inspect newest-cni-262540 --format={{.State.Status}}
	I1209 05:40:47.023042 1422398 cli_runner.go:164] Run: docker exec newest-cni-262540 stat /var/lib/dpkg/alternatives/iptables
	I1209 05:40:47.074649 1422398 oci.go:144] the created container "newest-cni-262540" has a running status.
	I1209 05:40:47.074676 1422398 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22081-1142328/.minikube/machines/newest-cni-262540/id_rsa...
	I1209 05:40:47.692225 1422398 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22081-1142328/.minikube/machines/newest-cni-262540/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1209 05:40:47.718517 1422398 cli_runner.go:164] Run: docker container inspect newest-cni-262540 --format={{.State.Status}}
	I1209 05:40:47.740875 1422398 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1209 05:40:47.740894 1422398 kic_runner.go:114] Args: [docker exec --privileged newest-cni-262540 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1209 05:40:47.780644 1422398 cli_runner.go:164] Run: docker container inspect newest-cni-262540 --format={{.State.Status}}
	I1209 05:40:47.797898 1422398 machine.go:94] provisionDockerMachine start ...
	I1209 05:40:47.798001 1422398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-262540
	I1209 05:40:47.813940 1422398 main.go:143] libmachine: Using SSH client type: native
	I1209 05:40:47.814280 1422398 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 34205 <nil> <nil>}
	I1209 05:40:47.814295 1422398 main.go:143] libmachine: About to run SSH command:
	hostname
	I1209 05:40:47.814927 1422398 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1209 05:40:50.967418 1422398 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-262540
	
	I1209 05:40:50.967444 1422398 ubuntu.go:182] provisioning hostname "newest-cni-262540"
	I1209 05:40:50.967507 1422398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-262540
	I1209 05:40:50.983898 1422398 main.go:143] libmachine: Using SSH client type: native
	I1209 05:40:50.984244 1422398 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 34205 <nil> <nil>}
	I1209 05:40:50.984261 1422398 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-262540 && echo "newest-cni-262540" | sudo tee /etc/hostname
	I1209 05:40:51.158163 1422398 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-262540
	
	I1209 05:40:51.158329 1422398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-262540
	I1209 05:40:51.176198 1422398 main.go:143] libmachine: Using SSH client type: native
	I1209 05:40:51.176519 1422398 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 34205 <nil> <nil>}
	I1209 05:40:51.176535 1422398 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-262540' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-262540/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-262540' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 05:40:51.328246 1422398 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1209 05:40:51.328276 1422398 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22081-1142328/.minikube CaCertPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22081-1142328/.minikube}
	I1209 05:40:51.328348 1422398 ubuntu.go:190] setting up certificates
	I1209 05:40:51.328357 1422398 provision.go:84] configureAuth start
	I1209 05:40:51.328443 1422398 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-262540
	I1209 05:40:51.345620 1422398 provision.go:143] copyHostCerts
	I1209 05:40:51.345692 1422398 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-1142328/.minikube/key.pem, removing ...
	I1209 05:40:51.345702 1422398 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-1142328/.minikube/key.pem
	I1209 05:40:51.345782 1422398 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22081-1142328/.minikube/key.pem (1675 bytes)
	I1209 05:40:51.345892 1422398 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.pem, removing ...
	I1209 05:40:51.345903 1422398 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.pem
	I1209 05:40:51.345937 1422398 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.pem (1078 bytes)
	I1209 05:40:51.345995 1422398 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-1142328/.minikube/cert.pem, removing ...
	I1209 05:40:51.346004 1422398 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-1142328/.minikube/cert.pem
	I1209 05:40:51.346028 1422398 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22081-1142328/.minikube/cert.pem (1123 bytes)
	I1209 05:40:51.346078 1422398 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca-key.pem org=jenkins.newest-cni-262540 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-262540]
	I1209 05:40:51.459612 1422398 provision.go:177] copyRemoteCerts
	I1209 05:40:51.459736 1422398 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 05:40:51.459804 1422398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-262540
	I1209 05:40:51.477068 1422398 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34205 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/newest-cni-262540/id_rsa Username:docker}
	I1209 05:40:51.583430 1422398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1209 05:40:51.599930 1422398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1209 05:40:51.616188 1422398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1209 05:40:51.632654 1422398 provision.go:87] duration metric: took 304.27698ms to configureAuth
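
configureAuth signs a server certificate whose SANs must cover every name the machine is reached by (127.0.0.1, 192.168.76.2, localhost, minikube, newest-cni-262540 above). A quick way to inspect what actually got baked in, using the server.pem path from the log:

    # print the Subject Alternative Name extension of the generated cert
    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'
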
	I1209 05:40:51.632690 1422398 ubuntu.go:206] setting minikube options for container-runtime
	I1209 05:40:51.632889 1422398 config.go:182] Loaded profile config "newest-cni-262540": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1209 05:40:51.632903 1422398 machine.go:97] duration metric: took 3.834981835s to provisionDockerMachine
	I1209 05:40:51.632910 1422398 client.go:176] duration metric: took 10.386351456s to LocalClient.Create
	I1209 05:40:51.632935 1422398 start.go:167] duration metric: took 10.386419491s to libmachine.API.Create "newest-cni-262540"
	I1209 05:40:51.632946 1422398 start.go:293] postStartSetup for "newest-cni-262540" (driver="docker")
	I1209 05:40:51.632957 1422398 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 05:40:51.633024 1422398 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 05:40:51.633069 1422398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-262540
	I1209 05:40:51.648788 1422398 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34205 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/newest-cni-262540/id_rsa Username:docker}
	I1209 05:40:51.751770 1422398 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 05:40:51.754890 1422398 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1209 05:40:51.754915 1422398 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1209 05:40:51.754931 1422398 filesync.go:126] Scanning /home/jenkins/minikube-integration/22081-1142328/.minikube/addons for local assets ...
	I1209 05:40:51.754996 1422398 filesync.go:126] Scanning /home/jenkins/minikube-integration/22081-1142328/.minikube/files for local assets ...
	I1209 05:40:51.755088 1422398 filesync.go:149] local asset: /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/ssl/certs/11442312.pem -> 11442312.pem in /etc/ssl/certs
	I1209 05:40:51.755194 1422398 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 05:40:51.762311 1422398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/ssl/certs/11442312.pem --> /etc/ssl/certs/11442312.pem (1708 bytes)
	I1209 05:40:51.778994 1422398 start.go:296] duration metric: took 146.033857ms for postStartSetup
	I1209 05:40:51.779431 1422398 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-262540
	I1209 05:40:51.798065 1422398 profile.go:143] Saving config to /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/config.json ...
	I1209 05:40:51.798353 1422398 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1209 05:40:51.798402 1422398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-262540
	I1209 05:40:51.814583 1422398 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34205 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/newest-cni-262540/id_rsa Username:docker}
	I1209 05:40:51.917312 1422398 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1209 05:40:51.922304 1422398 start.go:128] duration metric: took 10.679457533s to createHost
	I1209 05:40:51.922328 1422398 start.go:83] releasing machines lock for "newest-cni-262540", held for 10.67959362s
	I1209 05:40:51.922409 1422398 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-262540
	I1209 05:40:51.939569 1422398 ssh_runner.go:195] Run: cat /version.json
	I1209 05:40:51.939636 1422398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-262540
	I1209 05:40:51.939638 1422398 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 05:40:51.939698 1422398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-262540
	I1209 05:40:51.960875 1422398 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34205 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/newest-cni-262540/id_rsa Username:docker}
	I1209 05:40:51.963453 1422398 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34205 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/newest-cni-262540/id_rsa Username:docker}
	I1209 05:40:52.063736 1422398 ssh_runner.go:195] Run: systemctl --version
	I1209 05:40:52.156351 1422398 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 05:40:52.160600 1422398 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 05:40:52.160672 1422398 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 05:40:52.187388 1422398 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1209 05:40:52.187415 1422398 start.go:496] detecting cgroup driver to use...
	I1209 05:40:52.187446 1422398 detect.go:187] detected "cgroupfs" cgroup driver on host os
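
The "cgroupfs" result above comes from minikube's own host probe; a common shell-level equivalent (not necessarily the exact check minikube performs) is the filesystem type mounted at /sys/fs/cgroup:

    # cgroup2fs => unified cgroup v2 hierarchy; tmpfs => legacy cgroup v1
    stat -fc %T /sys/fs/cgroup

The kubeadm preflight output further down, with its per-controller CGROUPS_* lines and its cgroup v1 deprecation warning, confirms this host is on cgroup v1.
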
	I1209 05:40:52.187504 1422398 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1209 05:40:52.203080 1422398 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1209 05:40:52.215843 1422398 docker.go:218] disabling cri-docker service (if available) ...
	I1209 05:40:52.215908 1422398 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 05:40:52.232148 1422398 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 05:40:52.250032 1422398 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 05:40:52.358548 1422398 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 05:40:52.481614 1422398 docker.go:234] disabling docker service ...
	I1209 05:40:52.481725 1422398 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 05:40:52.502779 1422398 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 05:40:52.515525 1422398 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 05:40:52.630357 1422398 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 05:40:52.754667 1422398 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 05:40:52.769286 1422398 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 05:40:52.785364 1422398 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1209 05:40:52.794252 1422398 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1209 05:40:52.803528 1422398 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1209 05:40:52.803619 1422398 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1209 05:40:52.812544 1422398 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1209 05:40:52.820837 1422398 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1209 05:40:52.829672 1422398 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1209 05:40:52.838554 1422398 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 05:40:52.846308 1422398 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1209 05:40:52.854529 1422398 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1209 05:40:52.863150 1422398 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1209 05:40:52.871579 1422398 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 05:40:52.878758 1422398 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 05:40:52.886006 1422398 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 05:40:53.012110 1422398 ssh_runner.go:195] Run: sudo systemctl restart containerd
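
Before that restart, the sed pipeline above rewrote several values in /etc/containerd/config.toml: the pause image, the cgroup driver, the CNI conf dir, and the unprivileged-ports setting. A one-liner to confirm the key edits landed:

    # a few of the values minikube just wrote into containerd's config
    grep -E 'SystemdCgroup|sandbox_image|conf_dir' /etc/containerd/config.toml
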
	I1209 05:40:53.145258 1422398 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1209 05:40:53.145356 1422398 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1209 05:40:53.148998 1422398 start.go:564] Will wait 60s for crictl version
	I1209 05:40:53.149063 1422398 ssh_runner.go:195] Run: which crictl
	I1209 05:40:53.152446 1422398 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1209 05:40:53.177386 1422398 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1209 05:40:53.177452 1422398 ssh_runner.go:195] Run: containerd --version
	I1209 05:40:53.199507 1422398 ssh_runner.go:195] Run: containerd --version
	I1209 05:40:53.225320 1422398 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1209 05:40:53.228305 1422398 cli_runner.go:164] Run: docker network inspect newest-cni-262540 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1209 05:40:53.243962 1422398 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1209 05:40:53.247757 1422398 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
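
Note the shape of that /etc/hosts update: inside a container /etc/hosts is bind-mounted, so sed -i (which renames a temp file over the original) fails with EBUSY; filtering into a temp file and cp-ing back rewrites the same inode instead. The general pattern, as a sketch:

    # rewrite a bind-mounted file without replacing its inode
    { grep -v 'old entry' /etc/hosts; echo '192.168.76.1 host.minikube.internal'; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts && rm /tmp/h.$$
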
	I1209 05:40:53.260215 1422398 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1209 05:40:53.262990 1422398 kubeadm.go:884] updating cluster {Name:newest-cni-262540 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-262540 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 05:40:53.263149 1422398 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1209 05:40:53.263229 1422398 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 05:40:53.289432 1422398 containerd.go:627] all images are preloaded for containerd runtime.
	I1209 05:40:53.289455 1422398 containerd.go:534] Images already preloaded, skipping extraction
	I1209 05:40:53.289546 1422398 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 05:40:53.312520 1422398 containerd.go:627] all images are preloaded for containerd runtime.
	I1209 05:40:53.312544 1422398 cache_images.go:86] Images are preloaded, skipping loading
	I1209 05:40:53.312552 1422398 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1209 05:40:53.312646 1422398 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-262540 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-262540 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
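
The ExecStart flags above are written to the systemd drop-in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below; on the node, the merged view of the unit plus its drop-ins can be checked with:

    # show kubelet.service and every drop-in, in the order systemd applies them
    systemctl cat kubelet
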
	I1209 05:40:53.312713 1422398 ssh_runner.go:195] Run: sudo crictl info
	I1209 05:40:53.337527 1422398 cni.go:84] Creating CNI manager for ""
	I1209 05:40:53.337552 1422398 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1209 05:40:53.337571 1422398 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1209 05:40:53.337595 1422398 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-262540 NodeName:newest-cni-262540 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1209 05:40:53.337729 1422398 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-262540"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1209 05:40:53.337802 1422398 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1209 05:40:53.345447 1422398 binaries.go:51] Found k8s binaries, skipping transfer
	I1209 05:40:53.345517 1422398 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1209 05:40:53.352930 1422398 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1209 05:40:53.365409 1422398 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1209 05:40:53.377954 1422398 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
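
The 2235-byte config just shipped can be sanity-checked before init is attempted. Recent kubeadm releases (v1.26 and later) carry a validator; assuming the subcommand is still present in this v1.35.0-beta.0 binary:

    # validate the generated config against kubeadm's API schemas
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new
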
	I1209 05:40:53.391187 1422398 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1209 05:40:53.394878 1422398 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 05:40:53.404484 1422398 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 05:40:53.509615 1422398 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 05:40:53.532992 1422398 certs.go:69] Setting up /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540 for IP: 192.168.76.2
	I1209 05:40:53.533014 1422398 certs.go:195] generating shared ca certs ...
	I1209 05:40:53.533065 1422398 certs.go:227] acquiring lock for ca certs: {Name:mk15788702f8c4e23b5aeab3f44961d296fab259 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 05:40:53.533239 1422398 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.key
	I1209 05:40:53.533305 1422398 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/proxy-client-ca.key
	I1209 05:40:53.533322 1422398 certs.go:257] generating profile certs ...
	I1209 05:40:53.533397 1422398 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/client.key
	I1209 05:40:53.533414 1422398 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/client.crt with IP's: []
	I1209 05:40:53.604706 1422398 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/client.crt ...
	I1209 05:40:53.604742 1422398 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/client.crt: {Name:mk908e1c63967383d20a56065c79b4bc0877c829 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 05:40:53.604954 1422398 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/client.key ...
	I1209 05:40:53.604968 1422398 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/client.key: {Name:mk0782d8c9cde6107bc905e7c1ffdb2b8a8e707c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 05:40:53.605064 1422398 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/apiserver.key.0ed49b31
	I1209 05:40:53.605085 1422398 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/apiserver.crt.0ed49b31 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1209 05:40:53.850901 1422398 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/apiserver.crt.0ed49b31 ...
	I1209 05:40:53.850943 1422398 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/apiserver.crt.0ed49b31: {Name:mkd1e6249eaef6a320629a45c3aa63c6b2fe9252 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 05:40:53.851131 1422398 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/apiserver.key.0ed49b31 ...
	I1209 05:40:53.851147 1422398 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/apiserver.key.0ed49b31: {Name:mk9df2970f8e62123fc8a73f846dec85a46dbe82 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 05:40:53.851239 1422398 certs.go:382] copying /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/apiserver.crt.0ed49b31 -> /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/apiserver.crt
	I1209 05:40:53.851366 1422398 certs.go:386] copying /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/apiserver.key.0ed49b31 -> /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/apiserver.key
	I1209 05:40:53.851432 1422398 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/proxy-client.key
	I1209 05:40:53.851456 1422398 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/proxy-client.crt with IP's: []
	I1209 05:40:54.332232 1422398 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/proxy-client.crt ...
	I1209 05:40:54.332268 1422398 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/proxy-client.crt: {Name:mk86c5c1261e1f4a7a13e3996ae202e7dfe017ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 05:40:54.332465 1422398 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/proxy-client.key ...
	I1209 05:40:54.332479 1422398 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/proxy-client.key: {Name:mk2b143aa140867219200e00888917dfd6928724 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 05:40:54.332672 1422398 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/1144231.pem (1338 bytes)
	W1209 05:40:54.332718 1422398 certs.go:480] ignoring /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/1144231_empty.pem, impossibly tiny 0 bytes
	I1209 05:40:54.332732 1422398 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca-key.pem (1679 bytes)
	I1209 05:40:54.332759 1422398 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem (1078 bytes)
	I1209 05:40:54.332787 1422398 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/cert.pem (1123 bytes)
	I1209 05:40:54.332816 1422398 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/key.pem (1675 bytes)
	I1209 05:40:54.332865 1422398 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/ssl/certs/11442312.pem (1708 bytes)
	I1209 05:40:54.333451 1422398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 05:40:54.351622 1422398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1209 05:40:54.369353 1422398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 05:40:54.386962 1422398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1209 05:40:54.405322 1422398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1209 05:40:54.422647 1422398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1209 05:40:54.483231 1422398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 05:40:54.515176 1422398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1209 05:40:54.533753 1422398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/ssl/certs/11442312.pem --> /usr/share/ca-certificates/11442312.pem (1708 bytes)
	I1209 05:40:54.552730 1422398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 05:40:54.570021 1422398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/1144231.pem --> /usr/share/ca-certificates/1144231.pem (1338 bytes)
	I1209 05:40:54.587455 1422398 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 05:40:54.600371 1422398 ssh_runner.go:195] Run: openssl version
	I1209 05:40:54.606642 1422398 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/11442312.pem
	I1209 05:40:54.613904 1422398 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/11442312.pem /etc/ssl/certs/11442312.pem
	I1209 05:40:54.621395 1422398 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11442312.pem
	I1209 05:40:54.624932 1422398 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 04:18 /usr/share/ca-certificates/11442312.pem
	I1209 05:40:54.625005 1422398 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11442312.pem
	I1209 05:40:54.665847 1422398 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1209 05:40:54.673355 1422398 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/11442312.pem /etc/ssl/certs/3ec20f2e.0
	I1209 05:40:54.680386 1422398 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1209 05:40:54.687518 1422398 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1209 05:40:54.694760 1422398 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 05:40:54.698200 1422398 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 04:09 /usr/share/ca-certificates/minikubeCA.pem
	I1209 05:40:54.698275 1422398 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 05:40:54.739105 1422398 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1209 05:40:54.746468 1422398 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1209 05:40:54.753754 1422398 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1144231.pem
	I1209 05:40:54.761267 1422398 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1144231.pem /etc/ssl/certs/1144231.pem
	I1209 05:40:54.768631 1422398 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1144231.pem
	I1209 05:40:54.772107 1422398 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 04:18 /usr/share/ca-certificates/1144231.pem
	I1209 05:40:54.772200 1422398 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1144231.pem
	I1209 05:40:54.812987 1422398 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1209 05:40:54.820239 1422398 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/1144231.pem /etc/ssl/certs/51391683.0
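
The opaque names b5213941.0, 3ec20f2e.0 and 51391683.0 are OpenSSL subject-hash links: OpenSSL locates a CA in /etc/ssl/certs by hashing the certificate's subject and looking for <hash>.0. The recipe the log is executing, written out directly:

    # compute the subject hash and point <hash>.0 at the PEM
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"

For minikubeCA the hash works out to b5213941, matching the symlink created above.
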
	I1209 05:40:54.827466 1422398 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 05:40:54.830847 1422398 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1209 05:40:54.830917 1422398 kubeadm.go:401] StartCluster: {Name:newest-cni-262540 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-262540 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 05:40:54.831012 1422398 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1209 05:40:54.831072 1422398 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 05:40:54.863416 1422398 cri.go:89] found id: ""
	I1209 05:40:54.863486 1422398 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1209 05:40:54.871043 1422398 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 05:40:54.878854 1422398 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1209 05:40:54.878952 1422398 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 05:40:54.886794 1422398 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 05:40:54.886847 1422398 kubeadm.go:158] found existing configuration files:
	
	I1209 05:40:54.886908 1422398 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1209 05:40:54.894435 1422398 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 05:40:54.894550 1422398 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 05:40:54.901704 1422398 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1209 05:40:54.909273 1422398 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 05:40:54.909385 1422398 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 05:40:54.916897 1422398 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1209 05:40:54.924926 1422398 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 05:40:54.925024 1422398 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 05:40:54.932137 1422398 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1209 05:40:54.939823 1422398 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 05:40:54.939911 1422398 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1209 05:40:54.947153 1422398 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1209 05:40:54.985945 1422398 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1209 05:40:54.986006 1422398 kubeadm.go:319] [preflight] Running pre-flight checks
	I1209 05:40:55.098038 1422398 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1209 05:40:55.098124 1422398 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1209 05:40:55.098168 1422398 kubeadm.go:319] OS: Linux
	I1209 05:40:55.098224 1422398 kubeadm.go:319] CGROUPS_CPU: enabled
	I1209 05:40:55.098279 1422398 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1209 05:40:55.098332 1422398 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1209 05:40:55.098392 1422398 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1209 05:40:55.098445 1422398 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1209 05:40:55.098502 1422398 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1209 05:40:55.098554 1422398 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1209 05:40:55.098607 1422398 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1209 05:40:55.098661 1422398 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1209 05:40:55.213327 1422398 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1209 05:40:55.213517 1422398 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1209 05:40:55.213698 1422398 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1209 05:40:55.232400 1422398 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1209 05:40:55.239096 1422398 out.go:252]   - Generating certificates and keys ...
	I1209 05:40:55.239277 1422398 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1209 05:40:55.239377 1422398 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1209 05:40:55.754714 1422398 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1209 05:40:56.183780 1422398 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1209 05:40:56.537089 1422398 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1209 05:40:56.838991 1422398 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1209 05:40:57.144061 1422398 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1209 05:40:57.144319 1422398 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-262540] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1209 05:40:57.237080 1422398 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1209 05:40:57.237305 1422398 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-262540] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1209 05:40:57.410307 1422398 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1209 05:40:57.494105 1422398 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1209 05:40:57.828849 1422398 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1209 05:40:57.829173 1422398 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1209 05:40:58.186047 1422398 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1209 05:40:58.553535 1422398 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1209 05:40:58.846953 1422398 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1209 05:40:59.216978 1422398 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1209 05:40:59.442501 1422398 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1209 05:40:59.443253 1422398 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1209 05:40:59.445958 1422398 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1209 05:40:59.449559 1422398 out.go:252]   - Booting up control plane ...
	I1209 05:40:59.449660 1422398 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1209 05:40:59.449739 1422398 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1209 05:40:59.449809 1422398 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1209 05:40:59.466855 1422398 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1209 05:40:59.467191 1422398 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1209 05:40:59.475169 1422398 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1209 05:40:59.475483 1422398 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1209 05:40:59.475706 1422398 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1209 05:40:59.606469 1422398 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1209 05:40:59.606609 1422398 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1209 05:43:42.940465 1404644 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001103315s
	I1209 05:43:42.940494 1404644 kubeadm.go:319] 
	I1209 05:43:42.940552 1404644 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1209 05:43:42.940585 1404644 kubeadm.go:319] 	- The kubelet is not running
	I1209 05:43:42.940690 1404644 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1209 05:43:42.940694 1404644 kubeadm.go:319] 
	I1209 05:43:42.940799 1404644 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1209 05:43:42.940831 1404644 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1209 05:43:42.940862 1404644 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1209 05:43:42.940866 1404644 kubeadm.go:319] 
	I1209 05:43:42.944449 1404644 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1209 05:43:42.944876 1404644 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1209 05:43:42.944989 1404644 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1209 05:43:42.945227 1404644 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1209 05:43:42.945235 1404644 kubeadm.go:319] 
	I1209 05:43:42.945305 1404644 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1209 05:43:42.945358 1404644 kubeadm.go:403] duration metric: took 8m7.791342576s to StartCluster
	I1209 05:43:42.945399 1404644 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:43:42.945466 1404644 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:43:42.969310 1404644 cri.go:89] found id: ""
	I1209 05:43:42.969335 1404644 logs.go:282] 0 containers: []
	W1209 05:43:42.969343 1404644 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:43:42.969349 1404644 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:43:42.969414 1404644 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:43:42.997525 1404644 cri.go:89] found id: ""
	I1209 05:43:42.997547 1404644 logs.go:282] 0 containers: []
	W1209 05:43:42.997556 1404644 logs.go:284] No container was found matching "etcd"
	I1209 05:43:42.997562 1404644 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:43:42.997619 1404644 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:43:43.022335 1404644 cri.go:89] found id: ""
	I1209 05:43:43.022360 1404644 logs.go:282] 0 containers: []
	W1209 05:43:43.022369 1404644 logs.go:284] No container was found matching "coredns"
	I1209 05:43:43.022380 1404644 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:43:43.022440 1404644 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:43:43.046700 1404644 cri.go:89] found id: ""
	I1209 05:43:43.046725 1404644 logs.go:282] 0 containers: []
	W1209 05:43:43.046734 1404644 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:43:43.046739 1404644 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:43:43.046797 1404644 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:43:43.071875 1404644 cri.go:89] found id: ""
	I1209 05:43:43.071906 1404644 logs.go:282] 0 containers: []
	W1209 05:43:43.071915 1404644 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:43:43.071921 1404644 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:43:43.071986 1404644 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:43:43.096153 1404644 cri.go:89] found id: ""
	I1209 05:43:43.096176 1404644 logs.go:282] 0 containers: []
	W1209 05:43:43.096190 1404644 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:43:43.096198 1404644 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:43:43.096259 1404644 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:43:43.121898 1404644 cri.go:89] found id: ""
	I1209 05:43:43.121922 1404644 logs.go:282] 0 containers: []
	W1209 05:43:43.121931 1404644 logs.go:284] No container was found matching "kindnet"
	I1209 05:43:43.121940 1404644 logs.go:123] Gathering logs for containerd ...
	I1209 05:43:43.121951 1404644 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:43:43.163306 1404644 logs.go:123] Gathering logs for container status ...
	I1209 05:43:43.163339 1404644 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:43:43.207532 1404644 logs.go:123] Gathering logs for kubelet ...
	I1209 05:43:43.207567 1404644 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:43:43.277243 1404644 logs.go:123] Gathering logs for dmesg ...
	I1209 05:43:43.277279 1404644 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:43:43.298477 1404644 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:43:43.298507 1404644 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:43:43.365347 1404644 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:43:43.357461    5461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:43:43.358090    5461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:43:43.359781    5461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:43:43.360115    5461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:43:43.361539    5461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	[... same five "connection refused" errors and server-refused message as in the stderr block above ...]
	
	** /stderr **
	W1209 05:43:43.365381 1404644 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001103315s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1209 05:43:43.365413 1404644 out.go:285] * 
	W1209 05:43:43.365475 1404644 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout/stderr: [... identical to the kubeadm init output, SystemVerification warnings, and wait-control-plane error in the block above ...]
	
	W1209 05:43:43.365493 1404644 out.go:285] * 
	W1209 05:43:43.367868 1404644 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 05:43:43.374302 1404644 out.go:203] 
	W1209 05:43:43.377044 1404644 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout/stderr: [... identical to the kubeadm init output, SystemVerification warnings, and wait-control-plane error in the first block above ...]
	
	W1209 05:43:43.377091 1404644 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1209 05:43:43.377112 1404644 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1209 05:43:43.380387 1404644 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 09 05:35:22 no-preload-842269 containerd[758]: time="2025-12-09T05:35:22.036308692Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-scheduler:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 09 05:35:23 no-preload-842269 containerd[758]: time="2025-12-09T05:35:23.816614233Z" level=info msg="No images store for sha256:eb9020767c0d3bbd754f3f52cbe4c8bdd935dd5862604d6dc0b1f10422189544"
	Dec 09 05:35:23 no-preload-842269 containerd[758]: time="2025-12-09T05:35:23.819781105Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.0-beta.0\""
	Dec 09 05:35:23 no-preload-842269 containerd[758]: time="2025-12-09T05:35:23.827929881Z" level=info msg="ImageCreate event name:\"sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 09 05:35:23 no-preload-842269 containerd[758]: time="2025-12-09T05:35:23.828477017Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-proxy:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 09 05:35:25 no-preload-842269 containerd[758]: time="2025-12-09T05:35:25.333353027Z" level=info msg="No images store for sha256:5ed8f231f07481c657ad0e1d039921948e7abbc30ef6215465129012c4c4a508"
	Dec 09 05:35:25 no-preload-842269 containerd[758]: time="2025-12-09T05:35:25.336577825Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-beta.0\""
	Dec 09 05:35:25 no-preload-842269 containerd[758]: time="2025-12-09T05:35:25.345148784Z" level=info msg="ImageCreate event name:\"sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 09 05:35:25 no-preload-842269 containerd[758]: time="2025-12-09T05:35:25.345908041Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 09 05:35:27 no-preload-842269 containerd[758]: time="2025-12-09T05:35:27.120561243Z" level=info msg="No images store for sha256:64f3fb0a3392f487dbd4300c920f76dc3de2961e11fd6bfbedc75c0d25b1954c"
	Dec 09 05:35:27 no-preload-842269 containerd[758]: time="2025-12-09T05:35:27.122833000Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-beta.0\""
	Dec 09 05:35:27 no-preload-842269 containerd[758]: time="2025-12-09T05:35:27.131014710Z" level=info msg="ImageCreate event name:\"sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 09 05:35:27 no-preload-842269 containerd[758]: time="2025-12-09T05:35:27.132076917Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 09 05:35:28 no-preload-842269 containerd[758]: time="2025-12-09T05:35:28.575071692Z" level=info msg="No images store for sha256:5e4a4fe83792bf529a4e283e09069cf50cc9882d04168a33903ed6809a492e61"
	Dec 09 05:35:28 no-preload-842269 containerd[758]: time="2025-12-09T05:35:28.577744678Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\""
	Dec 09 05:35:28 no-preload-842269 containerd[758]: time="2025-12-09T05:35:28.588706439Z" level=info msg="ImageCreate event name:\"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 09 05:35:28 no-preload-842269 containerd[758]: time="2025-12-09T05:35:28.589547631Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 09 05:35:30 no-preload-842269 containerd[758]: time="2025-12-09T05:35:30.380855874Z" level=info msg="No images store for sha256:89a52ae86f116708cd5ba0d54dfbf2ae3011f126ee9161c4afb19bf2a51ef285"
	Dec 09 05:35:30 no-preload-842269 containerd[758]: time="2025-12-09T05:35:30.384556393Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\""
	Dec 09 05:35:30 no-preload-842269 containerd[758]: time="2025-12-09T05:35:30.398357527Z" level=info msg="ImageCreate event name:\"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 09 05:35:30 no-preload-842269 containerd[758]: time="2025-12-09T05:35:30.402958452Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 09 05:35:30 no-preload-842269 containerd[758]: time="2025-12-09T05:35:30.951034968Z" level=info msg="No images store for sha256:7475c7d18769df89a804d5bebf679dbf94886f3626f07a2be923beaa0cc7e5b0"
	Dec 09 05:35:30 no-preload-842269 containerd[758]: time="2025-12-09T05:35:30.953249078Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\""
	Dec 09 05:35:30 no-preload-842269 containerd[758]: time="2025-12-09T05:35:30.967195965Z" level=info msg="ImageCreate event name:\"sha256:66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 09 05:35:30 no-preload-842269 containerd[758]: time="2025-12-09T05:35:30.967501105Z" level=info msg="ImageUpdate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:43:46.113684    5695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:43:46.114078    5695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:43:46.115704    5695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:43:46.116275    5695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:43:46.117815    5695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +25.904254] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:14] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:16] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:18] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:19] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:30] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:32] overlayfs: idmapped layers are currently not supported
	[ +28.114653] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:33] overlayfs: idmapped layers are currently not supported
	[ +23.720849] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:34] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:35] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:36] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:37] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:38] overlayfs: idmapped layers are currently not supported
	[ +23.656275] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:39] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:57] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:58] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:00] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:02] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:03] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:05] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:06] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 9 05:31] overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	
	
	==> kernel <==
	 05:43:46 up  8:25,  0 user,  load average: 0.18, 1.31, 1.80
	Linux no-preload-842269 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 09 05:43:43 no-preload-842269 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 05:43:43 no-preload-842269 kubelet[5446]: E1209 05:43:43.286563    5446 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 09 05:43:43 no-preload-842269 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 09 05:43:43 no-preload-842269 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 09 05:43:43 no-preload-842269 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 09 05:43:43 no-preload-842269 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 05:43:43 no-preload-842269 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 05:43:44 no-preload-842269 kubelet[5480]: E1209 05:43:44.043906    5480 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 09 05:43:44 no-preload-842269 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 09 05:43:44 no-preload-842269 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 09 05:43:44 no-preload-842269 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Dec 09 05:43:44 no-preload-842269 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 05:43:44 no-preload-842269 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 05:43:44 no-preload-842269 kubelet[5577]: E1209 05:43:44.788247    5577 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 09 05:43:44 no-preload-842269 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 09 05:43:44 no-preload-842269 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 09 05:43:45 no-preload-842269 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 323.
	Dec 09 05:43:45 no-preload-842269 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 05:43:45 no-preload-842269 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 05:43:45 no-preload-842269 kubelet[5610]: E1209 05:43:45.503710    5610 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 09 05:43:45 no-preload-842269 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 09 05:43:45 no-preload-842269 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 09 05:43:46 no-preload-842269 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 324.
	Dec 09 05:43:46 no-preload-842269 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 05:43:46 no-preload-842269 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
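
The kubelet journal above shows the real blocker: kubelet v1.35.0-beta.0 refuses to start because the kicbase node is on cgroup v1. A minimal sketch of the two workarounds the log itself points at, in shell; the profile name is taken from this test, and the failCgroupV1 field name is assumed from the SystemVerification warning's wording, not verified against this kubelet build:

	# Option 1: retry with the cgroup driver named in minikube's suggestion above.
	minikube start -p no-preload-842269 --driver=docker --container-runtime=containerd \
	  --kubernetes-version=v1.35.0-beta.0 \
	  --extra-config=kubelet.cgroup-driver=systemd

	# Option 2: explicitly opt back into cgroup v1 for kubelet v1.35+, per the warning
	# above (assumes failCgroupV1 is not already set in /var/lib/kubelet/config.yaml).
	minikube ssh -p no-preload-842269 -- \
	  "echo 'failCgroupV1: false' | sudo tee -a /var/lib/kubelet/config.yaml && sudo systemctl restart kubelet"

Either variant still has to get the kubelet past the 4m0s healthz wait shown above before kubeadm's wait-control-plane phase can proceed.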
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-842269 -n no-preload-842269
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-842269 -n no-preload-842269: exit status 6 (341.147231ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1209 05:43:46.601460 1427398 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-842269" does not appear in /home/jenkins/minikube-integration/22081-1142328/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "no-preload-842269" apiserver is not running, skipping kubectl commands (state="Stopped")
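
The kubeconfig endpoint error above is a stale-context problem, separate from the kubelet failure: "no-preload-842269" is missing from the harness kubeconfig. A short sketch of the repair path the warning itself names, reusing the status templates this test already queries:

	# Re-point kubectl at the profile's current endpoint:
	minikube -p no-preload-842269 update-context

	# Re-check the same fields the harness queries:
	minikube -p no-preload-842269 status --format='{{.Host}} {{.Kubelet}} {{.APIServer}}'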
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-842269
helpers_test.go:243: (dbg) docker inspect no-preload-842269:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "9789b34a5453b154c4ceca4f0038c1d7948d7f0f72f334a114a8452d803ad415",
	        "Created": "2025-12-09T05:35:10.617601088Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1404960,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-09T05:35:10.694361506Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e4eb91ed18a24161fce60c7cdd660144ecd5b8c5029dc2dea2c5e423c2f48ce4",
	        "ResolvConfPath": "/var/lib/docker/containers/9789b34a5453b154c4ceca4f0038c1d7948d7f0f72f334a114a8452d803ad415/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9789b34a5453b154c4ceca4f0038c1d7948d7f0f72f334a114a8452d803ad415/hostname",
	        "HostsPath": "/var/lib/docker/containers/9789b34a5453b154c4ceca4f0038c1d7948d7f0f72f334a114a8452d803ad415/hosts",
	        "LogPath": "/var/lib/docker/containers/9789b34a5453b154c4ceca4f0038c1d7948d7f0f72f334a114a8452d803ad415/9789b34a5453b154c4ceca4f0038c1d7948d7f0f72f334a114a8452d803ad415-json.log",
	        "Name": "/no-preload-842269",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-842269:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-842269",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9789b34a5453b154c4ceca4f0038c1d7948d7f0f72f334a114a8452d803ad415",
	                "LowerDir": "/var/lib/docker/overlay2/658f95b1f330fcb0d4774e9611c50218d409d0a889583e0050325b4fe479e9f7-init/diff:/var/lib/docker/overlay2/c44bb57aa59cc265266f37f2bb6e7ec0e7d641c3b4aeaa57e6d23deec6f0d1d4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/658f95b1f330fcb0d4774e9611c50218d409d0a889583e0050325b4fe479e9f7/merged",
	                "UpperDir": "/var/lib/docker/overlay2/658f95b1f330fcb0d4774e9611c50218d409d0a889583e0050325b4fe479e9f7/diff",
	                "WorkDir": "/var/lib/docker/overlay2/658f95b1f330fcb0d4774e9611c50218d409d0a889583e0050325b4fe479e9f7/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-842269",
	                "Source": "/var/lib/docker/volumes/no-preload-842269/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-842269",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-842269",
	                "name.minikube.sigs.k8s.io": "no-preload-842269",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c8d638bf0ac3f8de516cba00d80a3b149af62367900ced69943b89e3e7924db8",
	            "SandboxKey": "/var/run/docker/netns/c8d638bf0ac3",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34185"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34186"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34189"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34187"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34188"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-842269": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "4e:5c:05:82:25:f0",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6461bd7226e5723487f325bf78054dc63f1dafa2831abe7b44a8cc288dfa4456",
	                    "EndpointID": "5bccd85f7c02ee9bc4903397b85755d423fd035b5d120846d74ca8550b415301",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-842269",
	                        "9789b34a5453"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
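
For targeted checks, the same facts can be pulled from the inspect output with Go templates instead of scanning the full JSON; a sketch assuming the container name above, with template paths mirroring the keys in the dump:

	# Container state and restart count:
	docker inspect -f '{{.State.Status}} restarts={{.RestartCount}}' no-preload-842269

	# Host port bound to the apiserver's 8443/tcp (34188 in the dump above):
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' no-preload-842269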
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-842269 -n no-preload-842269
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-842269 -n no-preload-842269: exit status 6 (308.398502ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1209 05:43:46.925745 1427473 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-842269" does not appear in /home/jenkins/minikube-integration/22081-1142328/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-842269 logs -n 25
helpers_test.go:260: TestStartStop/group/no-preload/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                            ARGS                                                                                                                            │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p embed-certs-432108 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-432108           │ jenkins │ v1.37.0 │ 09 Dec 25 05:34 UTC │ 09 Dec 25 05:36 UTC │
	│ start   │ -p cert-expiration-074045 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd                                                                                                                                            │ cert-expiration-074045       │ jenkins │ v1.37.0 │ 09 Dec 25 05:34 UTC │ 09 Dec 25 05:35 UTC │
	│ delete  │ -p cert-expiration-074045                                                                                                                                                                                                                                  │ cert-expiration-074045       │ jenkins │ v1.37.0 │ 09 Dec 25 05:35 UTC │ 09 Dec 25 05:35 UTC │
	│ delete  │ -p disable-driver-mounts-094940                                                                                                                                                                                                                            │ disable-driver-mounts-094940 │ jenkins │ v1.37.0 │ 09 Dec 25 05:35 UTC │ 09 Dec 25 05:35 UTC │
	│ start   │ -p no-preload-842269 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-842269            │ jenkins │ v1.37.0 │ 09 Dec 25 05:35 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-432108 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                   │ embed-certs-432108           │ jenkins │ v1.37.0 │ 09 Dec 25 05:36 UTC │ 09 Dec 25 05:36 UTC │
	│ stop    │ -p embed-certs-432108 --alsologtostderr -v=3                                                                                                                                                                                                               │ embed-certs-432108           │ jenkins │ v1.37.0 │ 09 Dec 25 05:36 UTC │ 09 Dec 25 05:36 UTC │
	│ addons  │ enable dashboard -p embed-certs-432108 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                              │ embed-certs-432108           │ jenkins │ v1.37.0 │ 09 Dec 25 05:36 UTC │ 09 Dec 25 05:36 UTC │
	│ start   │ -p embed-certs-432108 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-432108           │ jenkins │ v1.37.0 │ 09 Dec 25 05:36 UTC │ 09 Dec 25 05:37 UTC │
	│ image   │ embed-certs-432108 image list --format=json                                                                                                                                                                                                                │ embed-certs-432108           │ jenkins │ v1.37.0 │ 09 Dec 25 05:37 UTC │ 09 Dec 25 05:37 UTC │
	│ pause   │ -p embed-certs-432108 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-432108           │ jenkins │ v1.37.0 │ 09 Dec 25 05:37 UTC │ 09 Dec 25 05:37 UTC │
	│ unpause │ -p embed-certs-432108 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-432108           │ jenkins │ v1.37.0 │ 09 Dec 25 05:37 UTC │ 09 Dec 25 05:37 UTC │
	│ delete  │ -p embed-certs-432108                                                                                                                                                                                                                                      │ embed-certs-432108           │ jenkins │ v1.37.0 │ 09 Dec 25 05:37 UTC │ 09 Dec 25 05:37 UTC │
	│ delete  │ -p embed-certs-432108                                                                                                                                                                                                                                      │ embed-certs-432108           │ jenkins │ v1.37.0 │ 09 Dec 25 05:37 UTC │ 09 Dec 25 05:37 UTC │
	│ start   │ -p default-k8s-diff-port-564611 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-564611 │ jenkins │ v1.37.0 │ 09 Dec 25 05:37 UTC │ 09 Dec 25 05:39 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-564611 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ default-k8s-diff-port-564611 │ jenkins │ v1.37.0 │ 09 Dec 25 05:39 UTC │ 09 Dec 25 05:39 UTC │
	│ stop    │ -p default-k8s-diff-port-564611 --alsologtostderr -v=3                                                                                                                                                                                                     │ default-k8s-diff-port-564611 │ jenkins │ v1.37.0 │ 09 Dec 25 05:39 UTC │ 09 Dec 25 05:39 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-564611 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ default-k8s-diff-port-564611 │ jenkins │ v1.37.0 │ 09 Dec 25 05:39 UTC │ 09 Dec 25 05:39 UTC │
	│ start   │ -p default-k8s-diff-port-564611 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-564611 │ jenkins │ v1.37.0 │ 09 Dec 25 05:39 UTC │ 09 Dec 25 05:40 UTC │
	│ image   │ default-k8s-diff-port-564611 image list --format=json                                                                                                                                                                                                      │ default-k8s-diff-port-564611 │ jenkins │ v1.37.0 │ 09 Dec 25 05:40 UTC │ 09 Dec 25 05:40 UTC │
	│ pause   │ -p default-k8s-diff-port-564611 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-564611 │ jenkins │ v1.37.0 │ 09 Dec 25 05:40 UTC │ 09 Dec 25 05:40 UTC │
	│ unpause │ -p default-k8s-diff-port-564611 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-564611 │ jenkins │ v1.37.0 │ 09 Dec 25 05:40 UTC │ 09 Dec 25 05:40 UTC │
	│ delete  │ -p default-k8s-diff-port-564611                                                                                                                                                                                                                            │ default-k8s-diff-port-564611 │ jenkins │ v1.37.0 │ 09 Dec 25 05:40 UTC │ 09 Dec 25 05:40 UTC │
	│ delete  │ -p default-k8s-diff-port-564611                                                                                                                                                                                                                            │ default-k8s-diff-port-564611 │ jenkins │ v1.37.0 │ 09 Dec 25 05:40 UTC │ 09 Dec 25 05:40 UTC │
	│ start   │ -p newest-cni-262540 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-262540            │ jenkins │ v1.37.0 │ 09 Dec 25 05:40 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/09 05:40:41
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 05:40:41.014166 1422398 out.go:360] Setting OutFile to fd 1 ...
	I1209 05:40:41.014346 1422398 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 05:40:41.014376 1422398 out.go:374] Setting ErrFile to fd 2...
	I1209 05:40:41.014403 1422398 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 05:40:41.014777 1422398 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-1142328/.minikube/bin
	I1209 05:40:41.015346 1422398 out.go:368] Setting JSON to false
	I1209 05:40:41.016651 1422398 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":30164,"bootTime":1765228677,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1209 05:40:41.016752 1422398 start.go:143] virtualization:  
	I1209 05:40:41.020737 1422398 out.go:179] * [newest-cni-262540] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1209 05:40:41.025100 1422398 out.go:179]   - MINIKUBE_LOCATION=22081
	I1209 05:40:41.025177 1422398 notify.go:221] Checking for updates...
	I1209 05:40:41.031377 1422398 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 05:40:41.034527 1422398 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22081-1142328/kubeconfig
	I1209 05:40:41.037660 1422398 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-1142328/.minikube
	I1209 05:40:41.040646 1422398 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1209 05:40:41.043555 1422398 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 05:40:41.047098 1422398 config.go:182] Loaded profile config "no-preload-842269": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1209 05:40:41.047203 1422398 driver.go:422] Setting default libvirt URI to qemu:///system
	I1209 05:40:41.082759 1422398 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1209 05:40:41.082877 1422398 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 05:40:41.141221 1422398 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-09 05:40:41.131267754 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1209 05:40:41.141331 1422398 docker.go:319] overlay module found
	I1209 05:40:41.144673 1422398 out.go:179] * Using the docker driver based on user configuration
	I1209 05:40:41.147595 1422398 start.go:309] selected driver: docker
	I1209 05:40:41.147618 1422398 start.go:927] validating driver "docker" against <nil>
	I1209 05:40:41.147633 1422398 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 05:40:41.148480 1422398 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 05:40:41.205051 1422398 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-09 05:40:41.195808894 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1209 05:40:41.205216 1422398 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1209 05:40:41.205249 1422398 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1209 05:40:41.205488 1422398 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1209 05:40:41.208233 1422398 out.go:179] * Using Docker driver with root privileges
	I1209 05:40:41.211172 1422398 cni.go:84] Creating CNI manager for ""
	I1209 05:40:41.211250 1422398 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1209 05:40:41.211263 1422398 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1209 05:40:41.211347 1422398 start.go:353] cluster config:
	{Name:newest-cni-262540 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-262540 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 05:40:41.214410 1422398 out.go:179] * Starting "newest-cni-262540" primary control-plane node in "newest-cni-262540" cluster
	I1209 05:40:41.217388 1422398 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1209 05:40:41.220416 1422398 out.go:179] * Pulling base image v0.0.48-1765184860-22066 ...
	I1209 05:40:41.223240 1422398 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1209 05:40:41.223288 1422398 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1209 05:40:41.223310 1422398 cache.go:65] Caching tarball of preloaded images
	I1209 05:40:41.223322 1422398 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local docker daemon
	I1209 05:40:41.223405 1422398 preload.go:238] Found /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1209 05:40:41.223416 1422398 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1209 05:40:41.223520 1422398 profile.go:143] Saving config to /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/config.json ...
	I1209 05:40:41.223546 1422398 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/config.json: {Name:mk3f2f0447b25b9c02ca47937d45ed297c23b284 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 05:40:41.242533 1422398 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local docker daemon, skipping pull
	I1209 05:40:41.242556 1422398 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c exists in daemon, skipping load
	I1209 05:40:41.242574 1422398 cache.go:243] Successfully downloaded all kic artifacts
	I1209 05:40:41.242607 1422398 start.go:360] acquireMachinesLock for newest-cni-262540: {Name:mk272d84ff1bc8c8949f2f0b1f608a7519899d10 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 05:40:41.242722 1422398 start.go:364] duration metric: took 94.012µs to acquireMachinesLock for "newest-cni-262540"
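The two lines above show minikube serializing machine creation behind a profile-scoped lock with the retry shape it prints in the log ({Delay:500ms Timeout:10m0s}). minikube uses a mutex library internally rather than a lock file, so the following is only a rough Go sketch of that delay/timeout pattern; the lock-file path and function names are made up for illustration:

package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

// acquire polls for an exclusive lock file until it succeeds or the
// deadline passes, mirroring the Delay/Timeout values logged above.
func acquire(path string, delay, timeout time.Duration) (release func(), err error) {
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			f.Close()
			return func() { os.Remove(path) }, nil
		}
		if !errors.Is(err, os.ErrExist) {
			return nil, err
		}
		if time.Now().After(deadline) {
			return nil, fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(delay)
	}
}

func main() {
	// Hypothetical lock path, not the one minikube uses.
	release, err := acquire("/tmp/newest-cni-262540.lock", 500*time.Millisecond, 10*time.Minute)
	if err != nil {
		panic(err)
	}
	defer release()
	fmt.Println("holding machines lock")
}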
	I1209 05:40:41.242752 1422398 start.go:93] Provisioning new machine with config: &{Name:newest-cni-262540 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-262540 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1209 05:40:41.242832 1422398 start.go:125] createHost starting for "" (driver="docker")
	I1209 05:40:41.246278 1422398 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1209 05:40:41.246513 1422398 start.go:159] libmachine.API.Create for "newest-cni-262540" (driver="docker")
	I1209 05:40:41.246549 1422398 client.go:173] LocalClient.Create starting
	I1209 05:40:41.246618 1422398 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem
	I1209 05:40:41.246653 1422398 main.go:143] libmachine: Decoding PEM data...
	I1209 05:40:41.246672 1422398 main.go:143] libmachine: Parsing certificate...
	I1209 05:40:41.246730 1422398 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/cert.pem
	I1209 05:40:41.246753 1422398 main.go:143] libmachine: Decoding PEM data...
	I1209 05:40:41.246765 1422398 main.go:143] libmachine: Parsing certificate...
	I1209 05:40:41.247138 1422398 cli_runner.go:164] Run: docker network inspect newest-cni-262540 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1209 05:40:41.262988 1422398 cli_runner.go:211] docker network inspect newest-cni-262540 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1209 05:40:41.263073 1422398 network_create.go:284] running [docker network inspect newest-cni-262540] to gather additional debugging logs...
	I1209 05:40:41.263095 1422398 cli_runner.go:164] Run: docker network inspect newest-cni-262540
	W1209 05:40:41.279120 1422398 cli_runner.go:211] docker network inspect newest-cni-262540 returned with exit code 1
	I1209 05:40:41.279154 1422398 network_create.go:287] error running [docker network inspect newest-cni-262540]: docker network inspect newest-cni-262540: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-262540 not found
	I1209 05:40:41.279168 1422398 network_create.go:289] output of [docker network inspect newest-cni-262540]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-262540 not found
	
	** /stderr **
	I1209 05:40:41.279286 1422398 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1209 05:40:41.295748 1422398 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-7a15eec16b1a IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:8a:b7:58:bc:12:6c} reservation:<nil>}
	I1209 05:40:41.296192 1422398 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-fcb9e6b38e8e IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:56:c3:7a:b4:06:4b} reservation:<nil>}
	I1209 05:40:41.296445 1422398 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-8c1346c67d6b IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:82:10:14:75:55:fb} reservation:<nil>}
	I1209 05:40:41.296875 1422398 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019e80f0}
	I1209 05:40:41.296895 1422398 network_create.go:124] attempt to create docker network newest-cni-262540 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1209 05:40:41.296949 1422398 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-262540 newest-cni-262540
	I1209 05:40:41.356493 1422398 network_create.go:108] docker network newest-cni-262540 192.168.76.0/24 created
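The network.go lines above walk candidate private /24 subnets (192.168.49.0, 58.0, 67.0, ...) and settle on the first one not occupied by an existing docker network. A self-contained Go sketch of that probe, assuming only that the docker CLI is on PATH; the stride of 9 is inferred from the sequence in the log, and none of this is minikube's actual source:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

type netInfo struct {
	IPAM struct {
		Config []struct {
			Subnet string `json:"Subnet"`
		} `json:"Config"`
	} `json:"IPAM"`
}

// takenSubnets collects the IPv4 subnets of all existing docker networks
// by shelling out to the same CLI commands seen in the log above.
func takenSubnets() (map[string]bool, error) {
	ids, err := exec.Command("docker", "network", "ls", "-q").Output()
	if err != nil {
		return nil, err
	}
	args := append([]string{"network", "inspect"}, strings.Fields(string(ids))...)
	out, err := exec.Command("docker", args...).Output()
	if err != nil {
		return nil, err
	}
	var nets []netInfo
	if err := json.Unmarshal(out, &nets); err != nil {
		return nil, err
	}
	taken := map[string]bool{}
	for _, n := range nets {
		for _, c := range n.IPAM.Config {
			taken[c.Subnet] = true
		}
	}
	return taken, nil
}

func main() {
	taken, err := takenSubnets()
	if err != nil {
		panic(err)
	}
	// Candidate /24s spaced 9 apart, matching the 49/58/67/76 walk above.
	for third := 49; third < 255; third += 9 {
		subnet := fmt.Sprintf("192.168.%d.0/24", third)
		if !taken[subnet] {
			fmt.Println("first free subnet:", subnet)
			return
		}
	}
}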
	I1209 05:40:41.356525 1422398 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-262540" container
	I1209 05:40:41.356609 1422398 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1209 05:40:41.372493 1422398 cli_runner.go:164] Run: docker volume create newest-cni-262540 --label name.minikube.sigs.k8s.io=newest-cni-262540 --label created_by.minikube.sigs.k8s.io=true
	I1209 05:40:41.390479 1422398 oci.go:103] Successfully created a docker volume newest-cni-262540
	I1209 05:40:41.390571 1422398 cli_runner.go:164] Run: docker run --rm --name newest-cni-262540-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-262540 --entrypoint /usr/bin/test -v newest-cni-262540:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c -d /var/lib
	I1209 05:40:41.957365 1422398 oci.go:107] Successfully prepared a docker volume newest-cni-262540
	I1209 05:40:41.957440 1422398 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1209 05:40:41.957454 1422398 kic.go:194] Starting extracting preloaded images to volume ...
	I1209 05:40:41.957523 1422398 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-262540:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c -I lz4 -xf /preloaded.tar -C /extractDir
	I1209 05:40:46.577478 1422398 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-262540:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c -I lz4 -xf /preloaded.tar -C /extractDir: (4.619919939s)
	I1209 05:40:46.577511 1422398 kic.go:203] duration metric: took 4.620053703s to extract preloaded images to volume ...
	W1209 05:40:46.577655 1422398 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1209 05:40:46.577765 1422398 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1209 05:40:46.641962 1422398 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-262540 --name newest-cni-262540 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-262540 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-262540 --network newest-cni-262540 --ip 192.168.76.2 --volume newest-cni-262540:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c
	I1209 05:40:46.963179 1422398 cli_runner.go:164] Run: docker container inspect newest-cni-262540 --format={{.State.Running}}
	I1209 05:40:46.990367 1422398 cli_runner.go:164] Run: docker container inspect newest-cni-262540 --format={{.State.Status}}
	I1209 05:40:47.023042 1422398 cli_runner.go:164] Run: docker exec newest-cni-262540 stat /var/lib/dpkg/alternatives/iptables
	I1209 05:40:47.074649 1422398 oci.go:144] the created container "newest-cni-262540" has a running status.
	I1209 05:40:47.074676 1422398 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22081-1142328/.minikube/machines/newest-cni-262540/id_rsa...
	I1209 05:40:47.692225 1422398 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22081-1142328/.minikube/machines/newest-cni-262540/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1209 05:40:47.718517 1422398 cli_runner.go:164] Run: docker container inspect newest-cni-262540 --format={{.State.Status}}
	I1209 05:40:47.740875 1422398 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1209 05:40:47.740894 1422398 kic_runner.go:114] Args: [docker exec --privileged newest-cni-262540 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1209 05:40:47.780644 1422398 cli_runner.go:164] Run: docker container inspect newest-cni-262540 --format={{.State.Status}}
	I1209 05:40:47.797898 1422398 machine.go:94] provisionDockerMachine start ...
	I1209 05:40:47.798001 1422398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-262540
	I1209 05:40:47.813940 1422398 main.go:143] libmachine: Using SSH client type: native
	I1209 05:40:47.814280 1422398 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 34205 <nil> <nil>}
	I1209 05:40:47.814295 1422398 main.go:143] libmachine: About to run SSH command:
	hostname
	I1209 05:40:47.814927 1422398 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1209 05:40:50.967418 1422398 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-262540
	
	I1209 05:40:50.967444 1422398 ubuntu.go:182] provisioning hostname "newest-cni-262540"
	I1209 05:40:50.967507 1422398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-262540
	I1209 05:40:50.983898 1422398 main.go:143] libmachine: Using SSH client type: native
	I1209 05:40:50.984244 1422398 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 34205 <nil> <nil>}
	I1209 05:40:50.984261 1422398 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-262540 && echo "newest-cni-262540" | sudo tee /etc/hostname
	I1209 05:40:51.158163 1422398 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-262540
	
	I1209 05:40:51.158329 1422398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-262540
	I1209 05:40:51.176198 1422398 main.go:143] libmachine: Using SSH client type: native
	I1209 05:40:51.176519 1422398 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 34205 <nil> <nil>}
	I1209 05:40:51.176535 1422398 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-262540' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-262540/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-262540' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 05:40:51.328246 1422398 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1209 05:40:51.328276 1422398 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22081-1142328/.minikube CaCertPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22081-1142328/.minikube}
	I1209 05:40:51.328348 1422398 ubuntu.go:190] setting up certificates
	I1209 05:40:51.328357 1422398 provision.go:84] configureAuth start
	I1209 05:40:51.328443 1422398 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-262540
	I1209 05:40:51.345620 1422398 provision.go:143] copyHostCerts
	I1209 05:40:51.345692 1422398 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-1142328/.minikube/key.pem, removing ...
	I1209 05:40:51.345702 1422398 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-1142328/.minikube/key.pem
	I1209 05:40:51.345782 1422398 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22081-1142328/.minikube/key.pem (1675 bytes)
	I1209 05:40:51.345892 1422398 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.pem, removing ...
	I1209 05:40:51.345903 1422398 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.pem
	I1209 05:40:51.345937 1422398 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.pem (1078 bytes)
	I1209 05:40:51.345995 1422398 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-1142328/.minikube/cert.pem, removing ...
	I1209 05:40:51.346004 1422398 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-1142328/.minikube/cert.pem
	I1209 05:40:51.346028 1422398 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22081-1142328/.minikube/cert.pem (1123 bytes)
	I1209 05:40:51.346078 1422398 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca-key.pem org=jenkins.newest-cni-262540 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-262540]
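The provision.go line above generates a server certificate whose SANs cover every name and address the machine will be reached by (127.0.0.1, 192.168.76.2, localhost, minikube, newest-cni-262540). A minimal Go sketch of issuing a cert with those SANs using the standard library; it self-signs for brevity, whereas the log shows minikube signing against its own ca.pem/ca-key.pem:

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-262540"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config dump
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// The SAN list from the log line above.
		DNSNames:    []string{"localhost", "minikube", "newest-cni-262540"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
		panic(err)
	}
}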
	I1209 05:40:51.459612 1422398 provision.go:177] copyRemoteCerts
	I1209 05:40:51.459736 1422398 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 05:40:51.459804 1422398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-262540
	I1209 05:40:51.477068 1422398 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34205 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/newest-cni-262540/id_rsa Username:docker}
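The sshutil.go line above carries everything needed to reach the node over SSH: 127.0.0.1:34205 (the published 22/tcp port), the per-machine id_rsa key, and the docker user. The earlier handshake EOF at 05:40:47 followed by success at 05:40:50 reflects retrying until sshd inside the fresh container is up. A Go sketch of that dial-with-retry using golang.org/x/crypto/ssh (an external module; the retry helper is illustrative, not minikube's code):

package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

// dialWithRetry keeps dialing until sshd accepts the handshake,
// mirroring the EOF-then-success sequence in the log.
func dialWithRetry(addr string, cfg *ssh.ClientConfig, tries int) (*ssh.Client, error) {
	var err error
	for i := 0; i < tries; i++ {
		var c *ssh.Client
		if c, err = ssh.Dial("tcp", addr, cfg); err == nil {
			return c, nil
		}
		time.Sleep(time.Second)
	}
	return nil, err
}

func main() {
	// Key path, port, and user taken from the sshutil.go log line above.
	keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/newest-cni-262540/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for a throwaway local container
		Timeout:         5 * time.Second,
	}
	client, err := dialWithRetry("127.0.0.1:34205", cfg, 30)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	fmt.Println("connected")
}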
	I1209 05:40:51.583430 1422398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1209 05:40:51.599930 1422398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1209 05:40:51.616188 1422398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1209 05:40:51.632654 1422398 provision.go:87] duration metric: took 304.27698ms to configureAuth
	I1209 05:40:51.632690 1422398 ubuntu.go:206] setting minikube options for container-runtime
	I1209 05:40:51.632889 1422398 config.go:182] Loaded profile config "newest-cni-262540": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1209 05:40:51.632903 1422398 machine.go:97] duration metric: took 3.834981835s to provisionDockerMachine
	I1209 05:40:51.632910 1422398 client.go:176] duration metric: took 10.386351456s to LocalClient.Create
	I1209 05:40:51.632935 1422398 start.go:167] duration metric: took 10.386419491s to libmachine.API.Create "newest-cni-262540"
	I1209 05:40:51.632946 1422398 start.go:293] postStartSetup for "newest-cni-262540" (driver="docker")
	I1209 05:40:51.632957 1422398 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 05:40:51.633024 1422398 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 05:40:51.633069 1422398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-262540
	I1209 05:40:51.648788 1422398 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34205 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/newest-cni-262540/id_rsa Username:docker}
	I1209 05:40:51.751770 1422398 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 05:40:51.754890 1422398 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1209 05:40:51.754915 1422398 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1209 05:40:51.754931 1422398 filesync.go:126] Scanning /home/jenkins/minikube-integration/22081-1142328/.minikube/addons for local assets ...
	I1209 05:40:51.754996 1422398 filesync.go:126] Scanning /home/jenkins/minikube-integration/22081-1142328/.minikube/files for local assets ...
	I1209 05:40:51.755088 1422398 filesync.go:149] local asset: /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/ssl/certs/11442312.pem -> 11442312.pem in /etc/ssl/certs
	I1209 05:40:51.755194 1422398 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 05:40:51.762311 1422398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/ssl/certs/11442312.pem --> /etc/ssl/certs/11442312.pem (1708 bytes)
	I1209 05:40:51.778994 1422398 start.go:296] duration metric: took 146.033857ms for postStartSetup
	I1209 05:40:51.779431 1422398 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-262540
	I1209 05:40:51.798065 1422398 profile.go:143] Saving config to /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/config.json ...
	I1209 05:40:51.798353 1422398 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1209 05:40:51.798402 1422398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-262540
	I1209 05:40:51.814583 1422398 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34205 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/newest-cni-262540/id_rsa Username:docker}
	I1209 05:40:51.917312 1422398 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1209 05:40:51.922304 1422398 start.go:128] duration metric: took 10.679457533s to createHost
	I1209 05:40:51.922328 1422398 start.go:83] releasing machines lock for "newest-cni-262540", held for 10.67959362s
	I1209 05:40:51.922409 1422398 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-262540
	I1209 05:40:51.939569 1422398 ssh_runner.go:195] Run: cat /version.json
	I1209 05:40:51.939636 1422398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-262540
	I1209 05:40:51.939638 1422398 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 05:40:51.939698 1422398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-262540
	I1209 05:40:51.960875 1422398 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34205 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/newest-cni-262540/id_rsa Username:docker}
	I1209 05:40:51.963453 1422398 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34205 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/newest-cni-262540/id_rsa Username:docker}
	I1209 05:40:52.063736 1422398 ssh_runner.go:195] Run: systemctl --version
	I1209 05:40:52.156351 1422398 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 05:40:52.160600 1422398 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 05:40:52.160672 1422398 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 05:40:52.187388 1422398 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1209 05:40:52.187415 1422398 start.go:496] detecting cgroup driver to use...
	I1209 05:40:52.187446 1422398 detect.go:187] detected "cgroupfs" cgroup driver on host os
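The detection above lands on "cgroupfs" because that is what the host's docker daemon reports (the docker info dumps earlier in this log show CgroupDriver:cgroupfs). A one-liner Go sketch that extracts just that field with the docker CLI's --format template:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
	if err != nil {
		panic(err)
	}
	fmt.Println("host cgroup driver:", strings.TrimSpace(string(out)))
}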
	I1209 05:40:52.187504 1422398 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1209 05:40:52.203080 1422398 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1209 05:40:52.215843 1422398 docker.go:218] disabling cri-docker service (if available) ...
	I1209 05:40:52.215908 1422398 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 05:40:52.232148 1422398 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 05:40:52.250032 1422398 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 05:40:52.358548 1422398 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 05:40:52.481614 1422398 docker.go:234] disabling docker service ...
	I1209 05:40:52.481725 1422398 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 05:40:52.502779 1422398 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 05:40:52.515525 1422398 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 05:40:52.630357 1422398 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 05:40:52.754667 1422398 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 05:40:52.769286 1422398 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 05:40:52.785364 1422398 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1209 05:40:52.794252 1422398 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1209 05:40:52.803528 1422398 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1209 05:40:52.803619 1422398 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1209 05:40:52.812544 1422398 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1209 05:40:52.820837 1422398 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1209 05:40:52.829672 1422398 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1209 05:40:52.838554 1422398 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 05:40:52.846308 1422398 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1209 05:40:52.854529 1422398 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1209 05:40:52.863150 1422398 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1209 05:40:52.871579 1422398 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 05:40:52.878758 1422398 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 05:40:52.886006 1422398 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 05:40:53.012110 1422398 ssh_runner.go:195] Run: sudo systemctl restart containerd
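Taken together, the sed edits above amount to setting a handful of values in /etc/containerd/config.toml before the restart. Collected here for readability (the enclosing table headers vary by containerd version and are omitted; this fragment only restates what the sed commands set):

sandbox_image = "registry.k8s.io/pause:3.10.1"
restrict_oom_score_adj = false
SystemdCgroup = false
conf_dir = "/etc/cni/net.d"
enable_unprivileged_ports = true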
	I1209 05:40:53.145258 1422398 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1209 05:40:53.145356 1422398 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1209 05:40:53.148998 1422398 start.go:564] Will wait 60s for crictl version
	I1209 05:40:53.149063 1422398 ssh_runner.go:195] Run: which crictl
	I1209 05:40:53.152446 1422398 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1209 05:40:53.177386 1422398 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1209 05:40:53.177452 1422398 ssh_runner.go:195] Run: containerd --version
	I1209 05:40:53.199507 1422398 ssh_runner.go:195] Run: containerd --version
	I1209 05:40:53.225320 1422398 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1209 05:40:53.228305 1422398 cli_runner.go:164] Run: docker network inspect newest-cni-262540 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1209 05:40:53.243962 1422398 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1209 05:40:53.247757 1422398 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 05:40:53.260215 1422398 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1209 05:40:53.262990 1422398 kubeadm.go:884] updating cluster {Name:newest-cni-262540 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-262540 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 05:40:53.263149 1422398 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1209 05:40:53.263229 1422398 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 05:40:53.289432 1422398 containerd.go:627] all images are preloaded for containerd runtime.
	I1209 05:40:53.289455 1422398 containerd.go:534] Images already preloaded, skipping extraction
	I1209 05:40:53.289546 1422398 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 05:40:53.312520 1422398 containerd.go:627] all images are preloaded for containerd runtime.
	I1209 05:40:53.312544 1422398 cache_images.go:86] Images are preloaded, skipping loading
	I1209 05:40:53.312552 1422398 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1209 05:40:53.312646 1422398 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-262540 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-262540 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1209 05:40:53.312713 1422398 ssh_runner.go:195] Run: sudo crictl info
	I1209 05:40:53.337527 1422398 cni.go:84] Creating CNI manager for ""
	I1209 05:40:53.337552 1422398 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1209 05:40:53.337571 1422398 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1209 05:40:53.337595 1422398 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-262540 NodeName:newest-cni-262540 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1209 05:40:53.337729 1422398 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-262540"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
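	For reference, a generated config like the one above can be sanity-checked without touching node state by running kubeadm's validator against the same file minikube uploads below; this is a sketch, not part of this run, with the binary and config paths taken from this log:
	
	    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml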
	
	I1209 05:40:53.337802 1422398 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1209 05:40:53.345447 1422398 binaries.go:51] Found k8s binaries, skipping transfer
	I1209 05:40:53.345517 1422398 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1209 05:40:53.352930 1422398 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1209 05:40:53.365409 1422398 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1209 05:40:53.377954 1422398 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
	I1209 05:40:53.391187 1422398 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1209 05:40:53.394878 1422398 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 05:40:53.404484 1422398 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 05:40:53.509615 1422398 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 05:40:53.532992 1422398 certs.go:69] Setting up /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540 for IP: 192.168.76.2
	I1209 05:40:53.533014 1422398 certs.go:195] generating shared ca certs ...
	I1209 05:40:53.533065 1422398 certs.go:227] acquiring lock for ca certs: {Name:mk15788702f8c4e23b5aeab3f44961d296fab259 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 05:40:53.533239 1422398 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.key
	I1209 05:40:53.533305 1422398 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/proxy-client-ca.key
	I1209 05:40:53.533322 1422398 certs.go:257] generating profile certs ...
	I1209 05:40:53.533397 1422398 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/client.key
	I1209 05:40:53.533414 1422398 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/client.crt with IP's: []
	I1209 05:40:53.604706 1422398 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/client.crt ...
	I1209 05:40:53.604742 1422398 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/client.crt: {Name:mk908e1c63967383d20a56065c79b4bc0877c829 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 05:40:53.604954 1422398 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/client.key ...
	I1209 05:40:53.604968 1422398 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/client.key: {Name:mk0782d8c9cde6107bc905e7c1ffdb2b8a8e707c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 05:40:53.605064 1422398 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/apiserver.key.0ed49b31
	I1209 05:40:53.605085 1422398 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/apiserver.crt.0ed49b31 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1209 05:40:53.850901 1422398 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/apiserver.crt.0ed49b31 ...
	I1209 05:40:53.850943 1422398 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/apiserver.crt.0ed49b31: {Name:mkd1e6249eaef6a320629a45c3aa63c6b2fe9252 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 05:40:53.851131 1422398 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/apiserver.key.0ed49b31 ...
	I1209 05:40:53.851147 1422398 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/apiserver.key.0ed49b31: {Name:mk9df2970f8e62123fc8a73f846dec85a46dbe82 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 05:40:53.851239 1422398 certs.go:382] copying /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/apiserver.crt.0ed49b31 -> /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/apiserver.crt
	I1209 05:40:53.851366 1422398 certs.go:386] copying /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/apiserver.key.0ed49b31 -> /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/apiserver.key
	I1209 05:40:53.851432 1422398 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/proxy-client.key
	I1209 05:40:53.851456 1422398 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/proxy-client.crt with IP's: []
	I1209 05:40:54.332232 1422398 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/proxy-client.crt ...
	I1209 05:40:54.332268 1422398 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/proxy-client.crt: {Name:mk86c5c1261e1f4a7a13e3996ae202e7dfe017ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 05:40:54.332465 1422398 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/proxy-client.key ...
	I1209 05:40:54.332479 1422398 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/proxy-client.key: {Name:mk2b143aa140867219200e00888917dfd6928724 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 05:40:54.332672 1422398 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/1144231.pem (1338 bytes)
	W1209 05:40:54.332718 1422398 certs.go:480] ignoring /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/1144231_empty.pem, impossibly tiny 0 bytes
	I1209 05:40:54.332732 1422398 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca-key.pem (1679 bytes)
	I1209 05:40:54.332759 1422398 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem (1078 bytes)
	I1209 05:40:54.332787 1422398 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/cert.pem (1123 bytes)
	I1209 05:40:54.332816 1422398 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/key.pem (1675 bytes)
	I1209 05:40:54.332865 1422398 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/ssl/certs/11442312.pem (1708 bytes)
	I1209 05:40:54.333451 1422398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 05:40:54.351622 1422398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1209 05:40:54.369353 1422398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 05:40:54.386962 1422398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1209 05:40:54.405322 1422398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1209 05:40:54.422647 1422398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1209 05:40:54.483231 1422398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 05:40:54.515176 1422398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1209 05:40:54.533753 1422398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/ssl/certs/11442312.pem --> /usr/share/ca-certificates/11442312.pem (1708 bytes)
	I1209 05:40:54.552730 1422398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 05:40:54.570021 1422398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/1144231.pem --> /usr/share/ca-certificates/1144231.pem (1338 bytes)
	I1209 05:40:54.587455 1422398 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 05:40:54.600371 1422398 ssh_runner.go:195] Run: openssl version
	I1209 05:40:54.606642 1422398 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/11442312.pem
	I1209 05:40:54.613904 1422398 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/11442312.pem /etc/ssl/certs/11442312.pem
	I1209 05:40:54.621395 1422398 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11442312.pem
	I1209 05:40:54.624932 1422398 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 04:18 /usr/share/ca-certificates/11442312.pem
	I1209 05:40:54.625005 1422398 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11442312.pem
	I1209 05:40:54.665847 1422398 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1209 05:40:54.673355 1422398 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/11442312.pem /etc/ssl/certs/3ec20f2e.0
	I1209 05:40:54.680386 1422398 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1209 05:40:54.687518 1422398 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1209 05:40:54.694760 1422398 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 05:40:54.698200 1422398 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 04:09 /usr/share/ca-certificates/minikubeCA.pem
	I1209 05:40:54.698275 1422398 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 05:40:54.739105 1422398 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1209 05:40:54.746468 1422398 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1209 05:40:54.753754 1422398 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1144231.pem
	I1209 05:40:54.761267 1422398 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1144231.pem /etc/ssl/certs/1144231.pem
	I1209 05:40:54.768631 1422398 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1144231.pem
	I1209 05:40:54.772107 1422398 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 04:18 /usr/share/ca-certificates/1144231.pem
	I1209 05:40:54.772200 1422398 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1144231.pem
	I1209 05:40:54.812987 1422398 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1209 05:40:54.820239 1422398 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/1144231.pem /etc/ssl/certs/51391683.0
	I1209 05:40:54.827466 1422398 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 05:40:54.830847 1422398 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1209 05:40:54.830917 1422398 kubeadm.go:401] StartCluster: {Name:newest-cni-262540 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-262540 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 05:40:54.831012 1422398 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1209 05:40:54.831072 1422398 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 05:40:54.863416 1422398 cri.go:89] found id: ""
	I1209 05:40:54.863486 1422398 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1209 05:40:54.871043 1422398 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 05:40:54.878854 1422398 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1209 05:40:54.878952 1422398 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 05:40:54.886794 1422398 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 05:40:54.886847 1422398 kubeadm.go:158] found existing configuration files:
	
	I1209 05:40:54.886908 1422398 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1209 05:40:54.894435 1422398 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 05:40:54.894550 1422398 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 05:40:54.901704 1422398 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1209 05:40:54.909273 1422398 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 05:40:54.909385 1422398 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 05:40:54.916897 1422398 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1209 05:40:54.924926 1422398 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 05:40:54.925024 1422398 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 05:40:54.932137 1422398 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1209 05:40:54.939823 1422398 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 05:40:54.939911 1422398 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1209 05:40:54.947153 1422398 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1209 05:40:54.985945 1422398 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1209 05:40:54.986006 1422398 kubeadm.go:319] [preflight] Running pre-flight checks
	I1209 05:40:55.098038 1422398 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1209 05:40:55.098124 1422398 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1209 05:40:55.098168 1422398 kubeadm.go:319] OS: Linux
	I1209 05:40:55.098224 1422398 kubeadm.go:319] CGROUPS_CPU: enabled
	I1209 05:40:55.098279 1422398 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1209 05:40:55.098332 1422398 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1209 05:40:55.098392 1422398 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1209 05:40:55.098445 1422398 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1209 05:40:55.098502 1422398 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1209 05:40:55.098554 1422398 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1209 05:40:55.098607 1422398 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1209 05:40:55.098661 1422398 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1209 05:40:55.213327 1422398 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1209 05:40:55.213517 1422398 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1209 05:40:55.213698 1422398 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1209 05:40:55.232400 1422398 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1209 05:40:55.239096 1422398 out.go:252]   - Generating certificates and keys ...
	I1209 05:40:55.239277 1422398 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1209 05:40:55.239377 1422398 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1209 05:40:55.754714 1422398 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1209 05:40:56.183780 1422398 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1209 05:40:56.537089 1422398 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1209 05:40:56.838991 1422398 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1209 05:40:57.144061 1422398 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1209 05:40:57.144319 1422398 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-262540] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1209 05:40:57.237080 1422398 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1209 05:40:57.237305 1422398 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-262540] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1209 05:40:57.410307 1422398 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1209 05:40:57.494105 1422398 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1209 05:40:57.828849 1422398 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1209 05:40:57.829173 1422398 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1209 05:40:58.186047 1422398 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1209 05:40:58.553535 1422398 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1209 05:40:58.846953 1422398 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1209 05:40:59.216978 1422398 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1209 05:40:59.442501 1422398 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1209 05:40:59.443253 1422398 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1209 05:40:59.445958 1422398 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1209 05:40:59.449559 1422398 out.go:252]   - Booting up control plane ...
	I1209 05:40:59.449660 1422398 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1209 05:40:59.449739 1422398 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1209 05:40:59.449809 1422398 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1209 05:40:59.466855 1422398 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1209 05:40:59.467191 1422398 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1209 05:40:59.475169 1422398 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1209 05:40:59.475483 1422398 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1209 05:40:59.475706 1422398 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1209 05:40:59.606469 1422398 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1209 05:40:59.606609 1422398 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1209 05:43:42.940465 1404644 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001103315s
	I1209 05:43:42.940494 1404644 kubeadm.go:319] 
	I1209 05:43:42.940552 1404644 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1209 05:43:42.940585 1404644 kubeadm.go:319] 	- The kubelet is not running
	I1209 05:43:42.940690 1404644 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1209 05:43:42.940694 1404644 kubeadm.go:319] 
	I1209 05:43:42.940799 1404644 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1209 05:43:42.940831 1404644 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1209 05:43:42.940862 1404644 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1209 05:43:42.940866 1404644 kubeadm.go:319] 
	I1209 05:43:42.944449 1404644 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1209 05:43:42.944876 1404644 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1209 05:43:42.944989 1404644 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1209 05:43:42.945227 1404644 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1209 05:43:42.945235 1404644 kubeadm.go:319] 
	I1209 05:43:42.945305 1404644 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
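	The health probe kubeadm times out on above can be repeated by hand from inside the node, which is often the quickest way to watch the kubelet fail in real time; a sketch, where <profile> is a placeholder rather than a value from this log:
	
	    minikube ssh -p <profile> -- curl -sS http://127.0.0.1:10248/healthz
	    minikube ssh -p <profile> -- sudo journalctl -u kubelet -n 50 --no-pager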
	I1209 05:43:42.945358 1404644 kubeadm.go:403] duration metric: took 8m7.791342576s to StartCluster
	I1209 05:43:42.945399 1404644 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:43:42.945466 1404644 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:43:42.969310 1404644 cri.go:89] found id: ""
	I1209 05:43:42.969335 1404644 logs.go:282] 0 containers: []
	W1209 05:43:42.969343 1404644 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:43:42.969349 1404644 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:43:42.969414 1404644 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:43:42.997525 1404644 cri.go:89] found id: ""
	I1209 05:43:42.997547 1404644 logs.go:282] 0 containers: []
	W1209 05:43:42.997556 1404644 logs.go:284] No container was found matching "etcd"
	I1209 05:43:42.997562 1404644 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:43:42.997619 1404644 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:43:43.022335 1404644 cri.go:89] found id: ""
	I1209 05:43:43.022360 1404644 logs.go:282] 0 containers: []
	W1209 05:43:43.022369 1404644 logs.go:284] No container was found matching "coredns"
	I1209 05:43:43.022380 1404644 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:43:43.022440 1404644 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:43:43.046700 1404644 cri.go:89] found id: ""
	I1209 05:43:43.046725 1404644 logs.go:282] 0 containers: []
	W1209 05:43:43.046734 1404644 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:43:43.046739 1404644 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:43:43.046797 1404644 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:43:43.071875 1404644 cri.go:89] found id: ""
	I1209 05:43:43.071906 1404644 logs.go:282] 0 containers: []
	W1209 05:43:43.071915 1404644 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:43:43.071921 1404644 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:43:43.071986 1404644 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:43:43.096153 1404644 cri.go:89] found id: ""
	I1209 05:43:43.096176 1404644 logs.go:282] 0 containers: []
	W1209 05:43:43.096190 1404644 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:43:43.096198 1404644 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:43:43.096259 1404644 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:43:43.121898 1404644 cri.go:89] found id: ""
	I1209 05:43:43.121922 1404644 logs.go:282] 0 containers: []
	W1209 05:43:43.121931 1404644 logs.go:284] No container was found matching "kindnet"
	I1209 05:43:43.121940 1404644 logs.go:123] Gathering logs for containerd ...
	I1209 05:43:43.121951 1404644 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:43:43.163306 1404644 logs.go:123] Gathering logs for container status ...
	I1209 05:43:43.163339 1404644 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:43:43.207532 1404644 logs.go:123] Gathering logs for kubelet ...
	I1209 05:43:43.207567 1404644 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:43:43.277243 1404644 logs.go:123] Gathering logs for dmesg ...
	I1209 05:43:43.277279 1404644 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:43:43.298477 1404644 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:43:43.298507 1404644 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:43:43.365347 1404644 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:43:43.357461    5461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:43:43.358090    5461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:43:43.359781    5461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:43:43.360115    5461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:43:43.361539    5461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:43:43.357461    5461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:43:43.358090    5461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:43:43.359781    5461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:43:43.360115    5461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:43:43.361539    5461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1209 05:43:43.365381 1404644 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001103315s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1209 05:43:43.365413 1404644 out.go:285] * 
	W1209 05:43:43.365475 1404644 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001103315s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1209 05:43:43.365493 1404644 out.go:285] * 
	W1209 05:43:43.367868 1404644 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 05:43:43.374302 1404644 out.go:203] 
	W1209 05:43:43.377044 1404644 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001103315s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1209 05:43:43.377091 1404644 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1209 05:43:43.377112 1404644 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1209 05:43:43.380387 1404644 out.go:203] 
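	Following the suggestion printed above, a retry with the systemd cgroup driver would look roughly like this; a sketch only, since whether kubelet v1.35 still honors a cgroup-driver override passed this way is not verified here, and <profile> is a placeholder:
	
	    out/minikube-linux-arm64 start -p <profile> --extra-config=kubelet.cgroup-driver=systemd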
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 09 05:35:22 no-preload-842269 containerd[758]: time="2025-12-09T05:35:22.036308692Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-scheduler:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 09 05:35:23 no-preload-842269 containerd[758]: time="2025-12-09T05:35:23.816614233Z" level=info msg="No images store for sha256:eb9020767c0d3bbd754f3f52cbe4c8bdd935dd5862604d6dc0b1f10422189544"
	Dec 09 05:35:23 no-preload-842269 containerd[758]: time="2025-12-09T05:35:23.819781105Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.0-beta.0\""
	Dec 09 05:35:23 no-preload-842269 containerd[758]: time="2025-12-09T05:35:23.827929881Z" level=info msg="ImageCreate event name:\"sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 09 05:35:23 no-preload-842269 containerd[758]: time="2025-12-09T05:35:23.828477017Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-proxy:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 09 05:35:25 no-preload-842269 containerd[758]: time="2025-12-09T05:35:25.333353027Z" level=info msg="No images store for sha256:5ed8f231f07481c657ad0e1d039921948e7abbc30ef6215465129012c4c4a508"
	Dec 09 05:35:25 no-preload-842269 containerd[758]: time="2025-12-09T05:35:25.336577825Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-beta.0\""
	Dec 09 05:35:25 no-preload-842269 containerd[758]: time="2025-12-09T05:35:25.345148784Z" level=info msg="ImageCreate event name:\"sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 09 05:35:25 no-preload-842269 containerd[758]: time="2025-12-09T05:35:25.345908041Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 09 05:35:27 no-preload-842269 containerd[758]: time="2025-12-09T05:35:27.120561243Z" level=info msg="No images store for sha256:64f3fb0a3392f487dbd4300c920f76dc3de2961e11fd6bfbedc75c0d25b1954c"
	Dec 09 05:35:27 no-preload-842269 containerd[758]: time="2025-12-09T05:35:27.122833000Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-beta.0\""
	Dec 09 05:35:27 no-preload-842269 containerd[758]: time="2025-12-09T05:35:27.131014710Z" level=info msg="ImageCreate event name:\"sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 09 05:35:27 no-preload-842269 containerd[758]: time="2025-12-09T05:35:27.132076917Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 09 05:35:28 no-preload-842269 containerd[758]: time="2025-12-09T05:35:28.575071692Z" level=info msg="No images store for sha256:5e4a4fe83792bf529a4e283e09069cf50cc9882d04168a33903ed6809a492e61"
	Dec 09 05:35:28 no-preload-842269 containerd[758]: time="2025-12-09T05:35:28.577744678Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\""
	Dec 09 05:35:28 no-preload-842269 containerd[758]: time="2025-12-09T05:35:28.588706439Z" level=info msg="ImageCreate event name:\"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 09 05:35:28 no-preload-842269 containerd[758]: time="2025-12-09T05:35:28.589547631Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 09 05:35:30 no-preload-842269 containerd[758]: time="2025-12-09T05:35:30.380855874Z" level=info msg="No images store for sha256:89a52ae86f116708cd5ba0d54dfbf2ae3011f126ee9161c4afb19bf2a51ef285"
	Dec 09 05:35:30 no-preload-842269 containerd[758]: time="2025-12-09T05:35:30.384556393Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\""
	Dec 09 05:35:30 no-preload-842269 containerd[758]: time="2025-12-09T05:35:30.398357527Z" level=info msg="ImageCreate event name:\"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 09 05:35:30 no-preload-842269 containerd[758]: time="2025-12-09T05:35:30.402958452Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 09 05:35:30 no-preload-842269 containerd[758]: time="2025-12-09T05:35:30.951034968Z" level=info msg="No images store for sha256:7475c7d18769df89a804d5bebf679dbf94886f3626f07a2be923beaa0cc7e5b0"
	Dec 09 05:35:30 no-preload-842269 containerd[758]: time="2025-12-09T05:35:30.953249078Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\""
	Dec 09 05:35:30 no-preload-842269 containerd[758]: time="2025-12-09T05:35:30.967195965Z" level=info msg="ImageCreate event name:\"sha256:66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 09 05:35:30 no-preload-842269 containerd[758]: time="2025-12-09T05:35:30.967501105Z" level=info msg="ImageUpdate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:43:47.567149    5827 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:43:47.567855    5827 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:43:47.569386    5827 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:43:47.569663    5827 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:43:47.571111    5827 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +25.904254] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:14] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:16] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:18] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:19] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:30] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:32] overlayfs: idmapped layers are currently not supported
	[ +28.114653] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:33] overlayfs: idmapped layers are currently not supported
	[ +23.720849] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:34] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:35] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:36] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:37] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:38] overlayfs: idmapped layers are currently not supported
	[ +23.656275] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:39] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:57] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:58] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:00] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:02] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:03] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:05] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:06] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 9 05:31] overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	
	
	==> kernel <==
	 05:43:47 up  8:25,  0 user,  load average: 0.40, 1.34, 1.80
	Linux no-preload-842269 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 09 05:43:44 no-preload-842269 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 09 05:43:44 no-preload-842269 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Dec 09 05:43:44 no-preload-842269 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 05:43:44 no-preload-842269 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 05:43:44 no-preload-842269 kubelet[5577]: E1209 05:43:44.788247    5577 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 09 05:43:44 no-preload-842269 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 09 05:43:44 no-preload-842269 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 09 05:43:45 no-preload-842269 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 323.
	Dec 09 05:43:45 no-preload-842269 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 05:43:45 no-preload-842269 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 05:43:45 no-preload-842269 kubelet[5610]: E1209 05:43:45.503710    5610 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 09 05:43:45 no-preload-842269 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 09 05:43:45 no-preload-842269 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 09 05:43:46 no-preload-842269 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 324.
	Dec 09 05:43:46 no-preload-842269 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 05:43:46 no-preload-842269 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 05:43:46 no-preload-842269 kubelet[5707]: E1209 05:43:46.271815    5707 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 09 05:43:46 no-preload-842269 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 09 05:43:46 no-preload-842269 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 09 05:43:46 no-preload-842269 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 325.
	Dec 09 05:43:46 no-preload-842269 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 05:43:46 no-preload-842269 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 05:43:47 no-preload-842269 kubelet[5745]: E1209 05:43:47.042126    5745 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 09 05:43:47 no-preload-842269 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 09 05:43:47 no-preload-842269 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
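The failure chain in the log above is internally consistent: kubelet exits during configuration validation because the host is on cgroup v1 ("kubelet is configured to not run on a host using cgroup v1"), so the static pods, including kube-apiserver, are never created, and every call to localhost:8443 is refused. Two spot checks that would confirm this on the node, assuming SSH access through the profile (illustrative commands, not part of the recorded run):

	# "cgroup2fs" means cgroup v2; "tmpfs" means cgroup v1, the failing case here
	out/minikube-linux-arm64 -p no-preload-842269 ssh -- stat -fc %T /sys/fs/cgroup/
	# with kubelet crash-looping, no kube-apiserver container should exist at all
	out/minikube-linux-arm64 -p no-preload-842269 ssh -- sudo crictl ps -a | grep kube-apiserver

Note the container runs with "CgroupnsMode": "host" (see the docker inspect output below), so the node inherits the host's cgroup v1 hierarchy.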
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-842269 -n no-preload-842269
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-842269 -n no-preload-842269: exit status 6 (368.845892ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1209 05:43:48.064816 1427702 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-842269" does not appear in /home/jenkins/minikube-integration/22081-1142328/kubeconfig

** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "no-preload-842269" apiserver is not running, skipping kubectl commands (state="Stopped")
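The "stale minikube-vm" warning and the missing kubeconfig endpoint say the same thing: the kubeconfig at /home/jenkins/minikube-integration/22081-1142328/kubeconfig no longer carries an entry for this profile. On a healthy cluster the repair is the one the tool prints, shown here with the explicit profile flag (illustrative):

	out/minikube-linux-arm64 update-context -p no-preload-842269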
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (3.13s)

x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (89.69s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-842269 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1209 05:43:54.634782 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/old-k8s-version-384009/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 05:44:09.296526 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/default-k8s-diff-port-564611/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 05:44:09.302953 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/default-k8s-diff-port-564611/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 05:44:09.314385 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/default-k8s-diff-port-564611/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 05:44:09.335861 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/default-k8s-diff-port-564611/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 05:44:09.377356 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/default-k8s-diff-port-564611/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 05:44:09.458855 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/default-k8s-diff-port-564611/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 05:44:09.620327 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/default-k8s-diff-port-564611/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 05:44:09.942105 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/default-k8s-diff-port-564611/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 05:44:10.584196 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/default-k8s-diff-port-564611/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 05:44:11.865878 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/default-k8s-diff-port-564611/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 05:44:14.428165 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/default-k8s-diff-port-564611/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 05:44:19.550132 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/default-k8s-diff-port-564611/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 05:44:29.792305 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/default-k8s-diff-port-564611/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 05:44:50.273683 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/default-k8s-diff-port-564611/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-842269 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m28.196172615s)

-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
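All four validation errors above are the same symptom: kubectl cannot fetch the OpenAPI schema because nothing answers on localhost:8443. The `--validate=false` hint in the output would only skip that client-side schema check; the apply itself would still fail against a refused connection. A quick reachability probe from inside the node (illustrative; assumes curl is present in the kicbase image):

	out/minikube-linux-arm64 -p no-preload-842269 ssh -- curl -sk https://localhost:8443/healthz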
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p no-preload-842269 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-842269 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-842269 describe deploy/metrics-server -n kube-system: exit status 1 (53.601799ms)

** stderr ** 
	error: context "no-preload-842269" does not exist

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-842269 describe deploy/metrics-server -n kube-system": exit status 1
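The describe fails before ever reaching the cluster because the context itself is gone; that can be confirmed from the client side with standard kubectl built-ins:

	kubectl config get-contexts
	kubectl config view -o jsonpath='{.contexts[*].name}'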
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-842269
helpers_test.go:243: (dbg) docker inspect no-preload-842269:

-- stdout --
	[
	    {
	        "Id": "9789b34a5453b154c4ceca4f0038c1d7948d7f0f72f334a114a8452d803ad415",
	        "Created": "2025-12-09T05:35:10.617601088Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1404960,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-09T05:35:10.694361506Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e4eb91ed18a24161fce60c7cdd660144ecd5b8c5029dc2dea2c5e423c2f48ce4",
	        "ResolvConfPath": "/var/lib/docker/containers/9789b34a5453b154c4ceca4f0038c1d7948d7f0f72f334a114a8452d803ad415/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9789b34a5453b154c4ceca4f0038c1d7948d7f0f72f334a114a8452d803ad415/hostname",
	        "HostsPath": "/var/lib/docker/containers/9789b34a5453b154c4ceca4f0038c1d7948d7f0f72f334a114a8452d803ad415/hosts",
	        "LogPath": "/var/lib/docker/containers/9789b34a5453b154c4ceca4f0038c1d7948d7f0f72f334a114a8452d803ad415/9789b34a5453b154c4ceca4f0038c1d7948d7f0f72f334a114a8452d803ad415-json.log",
	        "Name": "/no-preload-842269",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-842269:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-842269",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9789b34a5453b154c4ceca4f0038c1d7948d7f0f72f334a114a8452d803ad415",
	                "LowerDir": "/var/lib/docker/overlay2/658f95b1f330fcb0d4774e9611c50218d409d0a889583e0050325b4fe479e9f7-init/diff:/var/lib/docker/overlay2/c44bb57aa59cc265266f37f2bb6e7ec0e7d641c3b4aeaa57e6d23deec6f0d1d4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/658f95b1f330fcb0d4774e9611c50218d409d0a889583e0050325b4fe479e9f7/merged",
	                "UpperDir": "/var/lib/docker/overlay2/658f95b1f330fcb0d4774e9611c50218d409d0a889583e0050325b4fe479e9f7/diff",
	                "WorkDir": "/var/lib/docker/overlay2/658f95b1f330fcb0d4774e9611c50218d409d0a889583e0050325b4fe479e9f7/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-842269",
	                "Source": "/var/lib/docker/volumes/no-preload-842269/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-842269",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-842269",
	                "name.minikube.sigs.k8s.io": "no-preload-842269",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c8d638bf0ac3f8de516cba00d80a3b149af62367900ced69943b89e3e7924db8",
	            "SandboxKey": "/var/run/docker/netns/c8d638bf0ac3",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34185"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34186"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34189"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34187"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34188"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-842269": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "4e:5c:05:82:25:f0",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6461bd7226e5723487f325bf78054dc63f1dafa2831abe7b44a8cc288dfa4456",
	                    "EndpointID": "5bccd85f7c02ee9bc4903397b85755d423fd035b5d120846d74ca8550b415301",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-842269",
	                        "9789b34a5453"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
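When only a few fields matter, the same inspect data can be queried with Go templates instead of dumping the whole document (format strings are illustrative):

	# container state, as consumed by the status checks above
	docker inspect -f '{{.State.Status}}' no-preload-842269
	# the node IP that minikube records for the profile network
	docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' no-preload-842269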
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-842269 -n no-preload-842269
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-842269 -n no-preload-842269: exit status 6 (314.482043ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1209 05:45:16.646280 1429340 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-842269" does not appear in /home/jenkins/minikube-integration/22081-1142328/kubeconfig

** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-842269 logs -n 25
helpers_test.go:260: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                            ARGS                                                                                                                            │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p cert-expiration-074045 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd                                                                                                                                            │ cert-expiration-074045       │ jenkins │ v1.37.0 │ 09 Dec 25 05:34 UTC │ 09 Dec 25 05:35 UTC │
	│ delete  │ -p cert-expiration-074045                                                                                                                                                                                                                                  │ cert-expiration-074045       │ jenkins │ v1.37.0 │ 09 Dec 25 05:35 UTC │ 09 Dec 25 05:35 UTC │
	│ delete  │ -p disable-driver-mounts-094940                                                                                                                                                                                                                            │ disable-driver-mounts-094940 │ jenkins │ v1.37.0 │ 09 Dec 25 05:35 UTC │ 09 Dec 25 05:35 UTC │
	│ start   │ -p no-preload-842269 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-842269            │ jenkins │ v1.37.0 │ 09 Dec 25 05:35 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-432108 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                   │ embed-certs-432108           │ jenkins │ v1.37.0 │ 09 Dec 25 05:36 UTC │ 09 Dec 25 05:36 UTC │
	│ stop    │ -p embed-certs-432108 --alsologtostderr -v=3                                                                                                                                                                                                               │ embed-certs-432108           │ jenkins │ v1.37.0 │ 09 Dec 25 05:36 UTC │ 09 Dec 25 05:36 UTC │
	│ addons  │ enable dashboard -p embed-certs-432108 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                              │ embed-certs-432108           │ jenkins │ v1.37.0 │ 09 Dec 25 05:36 UTC │ 09 Dec 25 05:36 UTC │
	│ start   │ -p embed-certs-432108 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-432108           │ jenkins │ v1.37.0 │ 09 Dec 25 05:36 UTC │ 09 Dec 25 05:37 UTC │
	│ image   │ embed-certs-432108 image list --format=json                                                                                                                                                                                                                │ embed-certs-432108           │ jenkins │ v1.37.0 │ 09 Dec 25 05:37 UTC │ 09 Dec 25 05:37 UTC │
	│ pause   │ -p embed-certs-432108 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-432108           │ jenkins │ v1.37.0 │ 09 Dec 25 05:37 UTC │ 09 Dec 25 05:37 UTC │
	│ unpause │ -p embed-certs-432108 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-432108           │ jenkins │ v1.37.0 │ 09 Dec 25 05:37 UTC │ 09 Dec 25 05:37 UTC │
	│ delete  │ -p embed-certs-432108                                                                                                                                                                                                                                      │ embed-certs-432108           │ jenkins │ v1.37.0 │ 09 Dec 25 05:37 UTC │ 09 Dec 25 05:37 UTC │
	│ delete  │ -p embed-certs-432108                                                                                                                                                                                                                                      │ embed-certs-432108           │ jenkins │ v1.37.0 │ 09 Dec 25 05:37 UTC │ 09 Dec 25 05:37 UTC │
	│ start   │ -p default-k8s-diff-port-564611 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-564611 │ jenkins │ v1.37.0 │ 09 Dec 25 05:37 UTC │ 09 Dec 25 05:39 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-564611 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ default-k8s-diff-port-564611 │ jenkins │ v1.37.0 │ 09 Dec 25 05:39 UTC │ 09 Dec 25 05:39 UTC │
	│ stop    │ -p default-k8s-diff-port-564611 --alsologtostderr -v=3                                                                                                                                                                                                     │ default-k8s-diff-port-564611 │ jenkins │ v1.37.0 │ 09 Dec 25 05:39 UTC │ 09 Dec 25 05:39 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-564611 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ default-k8s-diff-port-564611 │ jenkins │ v1.37.0 │ 09 Dec 25 05:39 UTC │ 09 Dec 25 05:39 UTC │
	│ start   │ -p default-k8s-diff-port-564611 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-564611 │ jenkins │ v1.37.0 │ 09 Dec 25 05:39 UTC │ 09 Dec 25 05:40 UTC │
	│ image   │ default-k8s-diff-port-564611 image list --format=json                                                                                                                                                                                                      │ default-k8s-diff-port-564611 │ jenkins │ v1.37.0 │ 09 Dec 25 05:40 UTC │ 09 Dec 25 05:40 UTC │
	│ pause   │ -p default-k8s-diff-port-564611 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-564611 │ jenkins │ v1.37.0 │ 09 Dec 25 05:40 UTC │ 09 Dec 25 05:40 UTC │
	│ unpause │ -p default-k8s-diff-port-564611 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-564611 │ jenkins │ v1.37.0 │ 09 Dec 25 05:40 UTC │ 09 Dec 25 05:40 UTC │
	│ delete  │ -p default-k8s-diff-port-564611                                                                                                                                                                                                                            │ default-k8s-diff-port-564611 │ jenkins │ v1.37.0 │ 09 Dec 25 05:40 UTC │ 09 Dec 25 05:40 UTC │
	│ delete  │ -p default-k8s-diff-port-564611                                                                                                                                                                                                                            │ default-k8s-diff-port-564611 │ jenkins │ v1.37.0 │ 09 Dec 25 05:40 UTC │ 09 Dec 25 05:40 UTC │
	│ start   │ -p newest-cni-262540 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-262540            │ jenkins │ v1.37.0 │ 09 Dec 25 05:40 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-842269 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ no-preload-842269            │ jenkins │ v1.37.0 │ 09 Dec 25 05:43 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/09 05:40:41
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 05:40:41.014166 1422398 out.go:360] Setting OutFile to fd 1 ...
	I1209 05:40:41.014346 1422398 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 05:40:41.014376 1422398 out.go:374] Setting ErrFile to fd 2...
	I1209 05:40:41.014403 1422398 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 05:40:41.014777 1422398 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-1142328/.minikube/bin
	I1209 05:40:41.015346 1422398 out.go:368] Setting JSON to false
	I1209 05:40:41.016651 1422398 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":30164,"bootTime":1765228677,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1209 05:40:41.016752 1422398 start.go:143] virtualization:  
	I1209 05:40:41.020737 1422398 out.go:179] * [newest-cni-262540] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1209 05:40:41.025100 1422398 out.go:179]   - MINIKUBE_LOCATION=22081
	I1209 05:40:41.025177 1422398 notify.go:221] Checking for updates...
	I1209 05:40:41.031377 1422398 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 05:40:41.034527 1422398 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22081-1142328/kubeconfig
	I1209 05:40:41.037660 1422398 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-1142328/.minikube
	I1209 05:40:41.040646 1422398 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1209 05:40:41.043555 1422398 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 05:40:41.047098 1422398 config.go:182] Loaded profile config "no-preload-842269": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1209 05:40:41.047203 1422398 driver.go:422] Setting default libvirt URI to qemu:///system
	I1209 05:40:41.082759 1422398 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1209 05:40:41.082877 1422398 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 05:40:41.141221 1422398 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-09 05:40:41.131267754 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1209 05:40:41.141331 1422398 docker.go:319] overlay module found
	I1209 05:40:41.144673 1422398 out.go:179] * Using the docker driver based on user configuration
	I1209 05:40:41.147595 1422398 start.go:309] selected driver: docker
	I1209 05:40:41.147618 1422398 start.go:927] validating driver "docker" against <nil>
	I1209 05:40:41.147633 1422398 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 05:40:41.148480 1422398 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 05:40:41.205051 1422398 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-09 05:40:41.195808894 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1209 05:40:41.205216 1422398 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1209 05:40:41.205249 1422398 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1209 05:40:41.205488 1422398 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1209 05:40:41.208233 1422398 out.go:179] * Using Docker driver with root privileges
	I1209 05:40:41.211172 1422398 cni.go:84] Creating CNI manager for ""
	I1209 05:40:41.211250 1422398 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1209 05:40:41.211263 1422398 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1209 05:40:41.211347 1422398 start.go:353] cluster config:
	{Name:newest-cni-262540 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-262540 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 05:40:41.214410 1422398 out.go:179] * Starting "newest-cni-262540" primary control-plane node in "newest-cni-262540" cluster
	I1209 05:40:41.217388 1422398 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1209 05:40:41.220416 1422398 out.go:179] * Pulling base image v0.0.48-1765184860-22066 ...
	I1209 05:40:41.223240 1422398 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1209 05:40:41.223288 1422398 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1209 05:40:41.223310 1422398 cache.go:65] Caching tarball of preloaded images
	I1209 05:40:41.223322 1422398 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local docker daemon
	I1209 05:40:41.223405 1422398 preload.go:238] Found /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1209 05:40:41.223416 1422398 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1209 05:40:41.223520 1422398 profile.go:143] Saving config to /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/config.json ...
	I1209 05:40:41.223546 1422398 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/config.json: {Name:mk3f2f0447b25b9c02ca47937d45ed297c23b284 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 05:40:41.242533 1422398 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local docker daemon, skipping pull
	I1209 05:40:41.242556 1422398 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c exists in daemon, skipping load
	I1209 05:40:41.242574 1422398 cache.go:243] Successfully downloaded all kic artifacts
	I1209 05:40:41.242607 1422398 start.go:360] acquireMachinesLock for newest-cni-262540: {Name:mk272d84ff1bc8c8949f2f0b1f608a7519899d10 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 05:40:41.242722 1422398 start.go:364] duration metric: took 94.012µs to acquireMachinesLock for "newest-cni-262540"
	I1209 05:40:41.242752 1422398 start.go:93] Provisioning new machine with config: &{Name:newest-cni-262540 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-262540 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
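Aside: this machine config is also what gets persisted to the profile's config.json (saved a few lines earlier), so individual fields can be pulled back out after the fact. A minimal sketch, assuming the profile JSON mirrors the struct dumped above and that jq is installed:

    # Field names are assumed to match the struct dump; adjust the path for your MINIKUBE_HOME.
    jq '{Memory, CPUs, Driver, KubernetesVersion: .KubernetesConfig.KubernetesVersion}' \
      "$HOME/.minikube/profiles/newest-cni-262540/config.json"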
	I1209 05:40:41.242832 1422398 start.go:125] createHost starting for "" (driver="docker")
	I1209 05:40:41.246278 1422398 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1209 05:40:41.246513 1422398 start.go:159] libmachine.API.Create for "newest-cni-262540" (driver="docker")
	I1209 05:40:41.246549 1422398 client.go:173] LocalClient.Create starting
	I1209 05:40:41.246618 1422398 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem
	I1209 05:40:41.246653 1422398 main.go:143] libmachine: Decoding PEM data...
	I1209 05:40:41.246672 1422398 main.go:143] libmachine: Parsing certificate...
	I1209 05:40:41.246730 1422398 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/cert.pem
	I1209 05:40:41.246753 1422398 main.go:143] libmachine: Decoding PEM data...
	I1209 05:40:41.246765 1422398 main.go:143] libmachine: Parsing certificate...
	I1209 05:40:41.247138 1422398 cli_runner.go:164] Run: docker network inspect newest-cni-262540 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1209 05:40:41.262988 1422398 cli_runner.go:211] docker network inspect newest-cni-262540 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1209 05:40:41.263073 1422398 network_create.go:284] running [docker network inspect newest-cni-262540] to gather additional debugging logs...
	I1209 05:40:41.263095 1422398 cli_runner.go:164] Run: docker network inspect newest-cni-262540
	W1209 05:40:41.279120 1422398 cli_runner.go:211] docker network inspect newest-cni-262540 returned with exit code 1
	I1209 05:40:41.279154 1422398 network_create.go:287] error running [docker network inspect newest-cni-262540]: docker network inspect newest-cni-262540: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-262540 not found
	I1209 05:40:41.279168 1422398 network_create.go:289] output of [docker network inspect newest-cni-262540]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-262540 not found
	
	** /stderr **
	I1209 05:40:41.279286 1422398 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1209 05:40:41.295748 1422398 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-7a15eec16b1a IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:8a:b7:58:bc:12:6c} reservation:<nil>}
	I1209 05:40:41.296192 1422398 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-fcb9e6b38e8e IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:56:c3:7a:b4:06:4b} reservation:<nil>}
	I1209 05:40:41.296445 1422398 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-8c1346c67d6b IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:82:10:14:75:55:fb} reservation:<nil>}
	I1209 05:40:41.296875 1422398 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019e80f0}
	I1209 05:40:41.296895 1422398 network_create.go:124] attempt to create docker network newest-cni-262540 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1209 05:40:41.296949 1422398 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-262540 newest-cni-262540
	I1209 05:40:41.356493 1422398 network_create.go:108] docker network newest-cni-262540 192.168.76.0/24 created
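The three "skipping subnet" probes above are minikube walking the private 192.168.x.0/24 ranges until it finds one not claimed by an existing bridge. The probe-then-create sequence can be reproduced by hand; demo-net below is a placeholder name, and the create flags mirror the log line:

    # Subnets already claimed by existing Docker networks.
    docker network ls -q | xargs -r docker network inspect \
      --format '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}}{{end}}'

    # Create a bridge on a subnet that did not appear above.
    docker network create --driver=bridge \
      --subnet=192.168.76.0/24 --gateway=192.168.76.1 \
      -o com.docker.network.driver.mtu=1500 \
      demo-net

    # Confirm the allocation took effect.
    docker network inspect demo-net --format '{{range .IPAM.Config}}{{.Subnet}} via {{.Gateway}}{{end}}'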
	I1209 05:40:41.356525 1422398 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-262540" container
	I1209 05:40:41.356609 1422398 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1209 05:40:41.372493 1422398 cli_runner.go:164] Run: docker volume create newest-cni-262540 --label name.minikube.sigs.k8s.io=newest-cni-262540 --label created_by.minikube.sigs.k8s.io=true
	I1209 05:40:41.390479 1422398 oci.go:103] Successfully created a docker volume newest-cni-262540
	I1209 05:40:41.390571 1422398 cli_runner.go:164] Run: docker run --rm --name newest-cni-262540-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-262540 --entrypoint /usr/bin/test -v newest-cni-262540:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c -d /var/lib
	I1209 05:40:41.957365 1422398 oci.go:107] Successfully prepared a docker volume newest-cni-262540
	I1209 05:40:41.957440 1422398 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1209 05:40:41.957454 1422398 kic.go:194] Starting extracting preloaded images to volume ...
	I1209 05:40:41.957523 1422398 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-262540:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c -I lz4 -xf /preloaded.tar -C /extractDir
	I1209 05:40:46.577478 1422398 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-262540:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c -I lz4 -xf /preloaded.tar -C /extractDir: (4.619919939s)
	I1209 05:40:46.577511 1422398 kic.go:203] duration metric: took 4.620053703s to extract preloaded images to volume ...
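The 4.6s extraction above is a throwaway container whose entrypoint is tar: the lz4 tarball is bind-mounted read-only and the named volume is mounted as the destination. A generic sketch of the same pattern (volume, tarball, and image names are placeholders; the image must ship tar and lz4, as the kicbase image does):

    docker volume create demo-vol

    # One-shot extractor; --rm removes the container once tar exits.
    docker run --rm --entrypoint /usr/bin/tar \
      -v "$PWD/preload.tar.lz4:/preloaded.tar:ro" \
      -v demo-vol:/extractDir \
      some-image-with-lz4 \
      -I lz4 -xf /preloaded.tar -C /extractDir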
	W1209 05:40:46.577655 1422398 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1209 05:40:46.577765 1422398 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1209 05:40:46.641962 1422398 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-262540 --name newest-cni-262540 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-262540 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-262540 --network newest-cni-262540 --ip 192.168.76.2 --volume newest-cni-262540:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c
	I1209 05:40:46.963179 1422398 cli_runner.go:164] Run: docker container inspect newest-cni-262540 --format={{.State.Running}}
	I1209 05:40:46.990367 1422398 cli_runner.go:164] Run: docker container inspect newest-cni-262540 --format={{.State.Status}}
	I1209 05:40:47.023042 1422398 cli_runner.go:164] Run: docker exec newest-cni-262540 stat /var/lib/dpkg/alternatives/iptables
	I1209 05:40:47.074649 1422398 oci.go:144] the created container "newest-cni-262540" has a running status.
	I1209 05:40:47.074676 1422398 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22081-1142328/.minikube/machines/newest-cni-262540/id_rsa...
	I1209 05:40:47.692225 1422398 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22081-1142328/.minikube/machines/newest-cni-262540/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1209 05:40:47.718517 1422398 cli_runner.go:164] Run: docker container inspect newest-cni-262540 --format={{.State.Status}}
	I1209 05:40:47.740875 1422398 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1209 05:40:47.740894 1422398 kic_runner.go:114] Args: [docker exec --privileged newest-cni-262540 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1209 05:40:47.780644 1422398 cli_runner.go:164] Run: docker container inspect newest-cni-262540 --format={{.State.Status}}
	I1209 05:40:47.797898 1422398 machine.go:94] provisionDockerMachine start ...
	I1209 05:40:47.798001 1422398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-262540
	I1209 05:40:47.813940 1422398 main.go:143] libmachine: Using SSH client type: native
	I1209 05:40:47.814280 1422398 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 34205 <nil> <nil>}
	I1209 05:40:47.814295 1422398 main.go:143] libmachine: About to run SSH command:
	hostname
	I1209 05:40:47.814927 1422398 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
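The handshake EOF here is benign: sshd inside the just-started container is not listening yet, and libmachine retries until it is (it succeeds three seconds later, below). The same wait can be scripted against the forwarded port; 34205 and the key path come from the log, with MINIKUBE_HOME standing in for the .minikube directory:

    until ssh -o StrictHostKeyChecking=no -o ConnectTimeout=2 \
          -i "$MINIKUBE_HOME/machines/newest-cni-262540/id_rsa" \
          -p 34205 docker@127.0.0.1 true 2>/dev/null; do
      sleep 1   # sshd not up yet; try again
    done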
	I1209 05:40:50.967418 1422398 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-262540
	
	I1209 05:40:50.967444 1422398 ubuntu.go:182] provisioning hostname "newest-cni-262540"
	I1209 05:40:50.967507 1422398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-262540
	I1209 05:40:50.983898 1422398 main.go:143] libmachine: Using SSH client type: native
	I1209 05:40:50.984244 1422398 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 34205 <nil> <nil>}
	I1209 05:40:50.984261 1422398 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-262540 && echo "newest-cni-262540" | sudo tee /etc/hostname
	I1209 05:40:51.158163 1422398 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-262540
	
	I1209 05:40:51.158329 1422398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-262540
	I1209 05:40:51.176198 1422398 main.go:143] libmachine: Using SSH client type: native
	I1209 05:40:51.176519 1422398 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 34205 <nil> <nil>}
	I1209 05:40:51.176535 1422398 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-262540' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-262540/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-262540' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 05:40:51.328246 1422398 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1209 05:40:51.328276 1422398 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22081-1142328/.minikube CaCertPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22081-1142328/.minikube}
	I1209 05:40:51.328348 1422398 ubuntu.go:190] setting up certificates
	I1209 05:40:51.328357 1422398 provision.go:84] configureAuth start
	I1209 05:40:51.328443 1422398 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-262540
	I1209 05:40:51.345620 1422398 provision.go:143] copyHostCerts
	I1209 05:40:51.345692 1422398 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-1142328/.minikube/key.pem, removing ...
	I1209 05:40:51.345702 1422398 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-1142328/.minikube/key.pem
	I1209 05:40:51.345782 1422398 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22081-1142328/.minikube/key.pem (1675 bytes)
	I1209 05:40:51.345892 1422398 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.pem, removing ...
	I1209 05:40:51.345903 1422398 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.pem
	I1209 05:40:51.345937 1422398 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.pem (1078 bytes)
	I1209 05:40:51.345995 1422398 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-1142328/.minikube/cert.pem, removing ...
	I1209 05:40:51.346004 1422398 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-1142328/.minikube/cert.pem
	I1209 05:40:51.346028 1422398 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22081-1142328/.minikube/cert.pem (1123 bytes)
	I1209 05:40:51.346078 1422398 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca-key.pem org=jenkins.newest-cni-262540 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-262540]
	I1209 05:40:51.459612 1422398 provision.go:177] copyRemoteCerts
	I1209 05:40:51.459736 1422398 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 05:40:51.459804 1422398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-262540
	I1209 05:40:51.477068 1422398 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34205 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/newest-cni-262540/id_rsa Username:docker}
	I1209 05:40:51.583430 1422398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1209 05:40:51.599930 1422398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1209 05:40:51.616188 1422398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1209 05:40:51.632654 1422398 provision.go:87] duration metric: took 304.27698ms to configureAuth
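configureAuth regenerates a server certificate whose SANs cover every address the machine answers on (127.0.0.1, 192.168.76.2, localhost, minikube, the hostname). Roughly the equivalent openssl steps, a sketch assuming an existing ca.pem/ca-key.pem pair in the current directory:

    # Fresh key and CSR for the machine.
    openssl genrsa -out server-key.pem 2048
    openssl req -new -key server-key.pem -subj "/O=jenkins.newest-cni-262540" -out server.csr

    # Sign with the CA and inject the SAN list from the log.
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
      -days 365 -out server.pem \
      -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.76.2,DNS:localhost,DNS:minikube,DNS:newest-cni-262540')

    # Check the SANs landed.
    openssl x509 -in server.pem -noout -ext subjectAltName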
	I1209 05:40:51.632690 1422398 ubuntu.go:206] setting minikube options for container-runtime
	I1209 05:40:51.632889 1422398 config.go:182] Loaded profile config "newest-cni-262540": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1209 05:40:51.632903 1422398 machine.go:97] duration metric: took 3.834981835s to provisionDockerMachine
	I1209 05:40:51.632910 1422398 client.go:176] duration metric: took 10.386351456s to LocalClient.Create
	I1209 05:40:51.632935 1422398 start.go:167] duration metric: took 10.386419491s to libmachine.API.Create "newest-cni-262540"
	I1209 05:40:51.632946 1422398 start.go:293] postStartSetup for "newest-cni-262540" (driver="docker")
	I1209 05:40:51.632957 1422398 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 05:40:51.633024 1422398 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 05:40:51.633069 1422398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-262540
	I1209 05:40:51.648788 1422398 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34205 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/newest-cni-262540/id_rsa Username:docker}
	I1209 05:40:51.751770 1422398 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 05:40:51.754890 1422398 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1209 05:40:51.754915 1422398 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1209 05:40:51.754931 1422398 filesync.go:126] Scanning /home/jenkins/minikube-integration/22081-1142328/.minikube/addons for local assets ...
	I1209 05:40:51.754996 1422398 filesync.go:126] Scanning /home/jenkins/minikube-integration/22081-1142328/.minikube/files for local assets ...
	I1209 05:40:51.755088 1422398 filesync.go:149] local asset: /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/ssl/certs/11442312.pem -> 11442312.pem in /etc/ssl/certs
	I1209 05:40:51.755194 1422398 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 05:40:51.762311 1422398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/ssl/certs/11442312.pem --> /etc/ssl/certs/11442312.pem (1708 bytes)
	I1209 05:40:51.778994 1422398 start.go:296] duration metric: took 146.033857ms for postStartSetup
	I1209 05:40:51.779431 1422398 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-262540
	I1209 05:40:51.798065 1422398 profile.go:143] Saving config to /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/config.json ...
	I1209 05:40:51.798353 1422398 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1209 05:40:51.798402 1422398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-262540
	I1209 05:40:51.814583 1422398 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34205 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/newest-cni-262540/id_rsa Username:docker}
	I1209 05:40:51.917312 1422398 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1209 05:40:51.922304 1422398 start.go:128] duration metric: took 10.679457533s to createHost
	I1209 05:40:51.922328 1422398 start.go:83] releasing machines lock for "newest-cni-262540", held for 10.67959362s
	I1209 05:40:51.922409 1422398 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-262540
	I1209 05:40:51.939569 1422398 ssh_runner.go:195] Run: cat /version.json
	I1209 05:40:51.939636 1422398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-262540
	I1209 05:40:51.939638 1422398 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 05:40:51.939698 1422398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-262540
	I1209 05:40:51.960875 1422398 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34205 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/newest-cni-262540/id_rsa Username:docker}
	I1209 05:40:51.963453 1422398 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34205 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/newest-cni-262540/id_rsa Username:docker}
	I1209 05:40:52.063736 1422398 ssh_runner.go:195] Run: systemctl --version
	I1209 05:40:52.156351 1422398 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 05:40:52.160600 1422398 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 05:40:52.160672 1422398 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 05:40:52.187388 1422398 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1209 05:40:52.187415 1422398 start.go:496] detecting cgroup driver to use...
	I1209 05:40:52.187446 1422398 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1209 05:40:52.187504 1422398 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1209 05:40:52.203080 1422398 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1209 05:40:52.215843 1422398 docker.go:218] disabling cri-docker service (if available) ...
	I1209 05:40:52.215908 1422398 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 05:40:52.232148 1422398 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 05:40:52.250032 1422398 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 05:40:52.358548 1422398 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 05:40:52.481614 1422398 docker.go:234] disabling docker service ...
	I1209 05:40:52.481725 1422398 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 05:40:52.502779 1422398 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 05:40:52.515525 1422398 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 05:40:52.630357 1422398 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 05:40:52.754667 1422398 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 05:40:52.769286 1422398 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 05:40:52.785364 1422398 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1209 05:40:52.794252 1422398 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1209 05:40:52.803528 1422398 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1209 05:40:52.803619 1422398 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1209 05:40:52.812544 1422398 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1209 05:40:52.820837 1422398 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1209 05:40:52.829672 1422398 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1209 05:40:52.838554 1422398 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 05:40:52.846308 1422398 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1209 05:40:52.854529 1422398 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1209 05:40:52.863150 1422398 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1209 05:40:52.871579 1422398 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 05:40:52.878758 1422398 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 05:40:52.886006 1422398 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 05:40:53.012110 1422398 ssh_runner.go:195] Run: sudo systemctl restart containerd
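The run of sed edits from 05:40:52.785 onward all target /etc/containerd/config.toml: pin the pause image, force cgroupfs by setting SystemdCgroup = false, normalize the runc runtime to io.containerd.runc.v2, and point conf_dir at /etc/cni/net.d, then reload and restart. Condensed into a single sketch (it folds several of the log's individual sed runs into one invocation):

    sudo sed -i -r \
      -e 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' \
      -e 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' \
      -e 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' \
      -e 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' \
      /etc/containerd/config.toml

    sudo systemctl daemon-reload
    sudo systemctl restart containerd

    # The log then waits up to 60s for the socket; the same check by hand:
    timeout 60 bash -c 'until [ -S /run/containerd/containerd.sock ]; do sleep 1; done'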
	I1209 05:40:53.145258 1422398 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1209 05:40:53.145356 1422398 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1209 05:40:53.148998 1422398 start.go:564] Will wait 60s for crictl version
	I1209 05:40:53.149063 1422398 ssh_runner.go:195] Run: which crictl
	I1209 05:40:53.152446 1422398 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1209 05:40:53.177386 1422398 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1209 05:40:53.177452 1422398 ssh_runner.go:195] Run: containerd --version
	I1209 05:40:53.199507 1422398 ssh_runner.go:195] Run: containerd --version
	I1209 05:40:53.225320 1422398 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1209 05:40:53.228305 1422398 cli_runner.go:164] Run: docker network inspect newest-cni-262540 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1209 05:40:53.243962 1422398 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1209 05:40:53.247757 1422398 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 05:40:53.260215 1422398 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1209 05:40:53.262990 1422398 kubeadm.go:884] updating cluster {Name:newest-cni-262540 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-262540 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 05:40:53.263149 1422398 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1209 05:40:53.263229 1422398 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 05:40:53.289432 1422398 containerd.go:627] all images are preloaded for containerd runtime.
	I1209 05:40:53.289455 1422398 containerd.go:534] Images already preloaded, skipping extraction
	I1209 05:40:53.289546 1422398 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 05:40:53.312520 1422398 containerd.go:627] all images are preloaded for containerd runtime.
	I1209 05:40:53.312544 1422398 cache_images.go:86] Images are preloaded, skipping loading
	I1209 05:40:53.312552 1422398 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1209 05:40:53.312646 1422398 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-262540 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-262540 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
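This unit fragment is installed below as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the 328-byte scp). The empty ExecStart= line is the systemd idiom for clearing the packaged command before substituting a new one. Installed by hand it would look roughly like (printf used instead of a heredoc to keep the sketch indentation-safe):

    sudo mkdir -p /etc/systemd/system/kubelet.service.d
    printf '%s\n' '[Unit]' 'Wants=containerd.service' '' '[Service]' 'ExecStart=' \
      'ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-262540 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2' \
      | sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null
    sudo systemctl daemon-reload && sudo systemctl restart kubelet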
	I1209 05:40:53.312713 1422398 ssh_runner.go:195] Run: sudo crictl info
	I1209 05:40:53.337527 1422398 cni.go:84] Creating CNI manager for ""
	I1209 05:40:53.337552 1422398 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1209 05:40:53.337571 1422398 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1209 05:40:53.337595 1422398 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-262540 NodeName:newest-cni-262540 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1209 05:40:53.337729 1422398 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-262540"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1209 05:40:53.337802 1422398 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1209 05:40:53.345447 1422398 binaries.go:51] Found k8s binaries, skipping transfer
	I1209 05:40:53.345517 1422398 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1209 05:40:53.352930 1422398 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1209 05:40:53.365409 1422398 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1209 05:40:53.377954 1422398 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
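With the rendered config now on disk at /var/tmp/minikube/kubeadm.yaml.new, it can be sanity-checked before init ever runs. A sketch; kubeadm config validate is available in recent kubeadm releases, and --dry-run exercises the full flow without touching node state:

    # Static schema check of the generated v1beta4 config.
    kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new

    # Or walk the whole init flow without modifying the node.
    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run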
	I1209 05:40:53.391187 1422398 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1209 05:40:53.394878 1422398 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
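The one-liner above is minikube's idempotent hosts-entry pattern: strip any stale line for the name, append the current mapping, and copy the temp file back over /etc/hosts. Wrapped in a helper for reuse (update_hosts_entry is an illustrative name, not a minikube function):

    update_hosts_entry() {
      local ip=$1 name=$2
      # Drop any existing tab-separated entry for the name, then append the new one.
      { grep -v $'\t'"${name}\$" /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > "/tmp/h.$$"
      sudo cp "/tmp/h.$$" /etc/hosts && rm -f "/tmp/h.$$"
    }
    update_hosts_entry 192.168.76.2 control-plane.minikube.internal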
	I1209 05:40:53.404484 1422398 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 05:40:53.509615 1422398 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 05:40:53.532992 1422398 certs.go:69] Setting up /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540 for IP: 192.168.76.2
	I1209 05:40:53.533014 1422398 certs.go:195] generating shared ca certs ...
	I1209 05:40:53.533065 1422398 certs.go:227] acquiring lock for ca certs: {Name:mk15788702f8c4e23b5aeab3f44961d296fab259 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 05:40:53.533239 1422398 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.key
	I1209 05:40:53.533305 1422398 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/proxy-client-ca.key
	I1209 05:40:53.533322 1422398 certs.go:257] generating profile certs ...
	I1209 05:40:53.533397 1422398 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/client.key
	I1209 05:40:53.533414 1422398 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/client.crt with IP's: []
	I1209 05:40:53.604706 1422398 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/client.crt ...
	I1209 05:40:53.604742 1422398 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/client.crt: {Name:mk908e1c63967383d20a56065c79b4bc0877c829 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 05:40:53.604954 1422398 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/client.key ...
	I1209 05:40:53.604968 1422398 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/client.key: {Name:mk0782d8c9cde6107bc905e7c1ffdb2b8a8e707c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 05:40:53.605064 1422398 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/apiserver.key.0ed49b31
	I1209 05:40:53.605085 1422398 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/apiserver.crt.0ed49b31 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1209 05:40:53.850901 1422398 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/apiserver.crt.0ed49b31 ...
	I1209 05:40:53.850943 1422398 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/apiserver.crt.0ed49b31: {Name:mkd1e6249eaef6a320629a45c3aa63c6b2fe9252 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 05:40:53.851131 1422398 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/apiserver.key.0ed49b31 ...
	I1209 05:40:53.851147 1422398 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/apiserver.key.0ed49b31: {Name:mk9df2970f8e62123fc8a73f846dec85a46dbe82 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 05:40:53.851239 1422398 certs.go:382] copying /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/apiserver.crt.0ed49b31 -> /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/apiserver.crt
	I1209 05:40:53.851366 1422398 certs.go:386] copying /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/apiserver.key.0ed49b31 -> /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/apiserver.key
	I1209 05:40:53.851432 1422398 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/proxy-client.key
	I1209 05:40:53.851456 1422398 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/proxy-client.crt with IP's: []
	I1209 05:40:54.332232 1422398 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/proxy-client.crt ...
	I1209 05:40:54.332268 1422398 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/proxy-client.crt: {Name:mk86c5c1261e1f4a7a13e3996ae202e7dfe017ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 05:40:54.332465 1422398 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/proxy-client.key ...
	I1209 05:40:54.332479 1422398 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/proxy-client.key: {Name:mk2b143aa140867219200e00888917dfd6928724 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 05:40:54.332672 1422398 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/1144231.pem (1338 bytes)
	W1209 05:40:54.332718 1422398 certs.go:480] ignoring /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/1144231_empty.pem, impossibly tiny 0 bytes
	I1209 05:40:54.332732 1422398 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca-key.pem (1679 bytes)
	I1209 05:40:54.332759 1422398 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem (1078 bytes)
	I1209 05:40:54.332787 1422398 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/cert.pem (1123 bytes)
	I1209 05:40:54.332816 1422398 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/key.pem (1675 bytes)
	I1209 05:40:54.332865 1422398 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/ssl/certs/11442312.pem (1708 bytes)
	I1209 05:40:54.333451 1422398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 05:40:54.351622 1422398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1209 05:40:54.369353 1422398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 05:40:54.386962 1422398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1209 05:40:54.405322 1422398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1209 05:40:54.422647 1422398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1209 05:40:54.483231 1422398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 05:40:54.515176 1422398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1209 05:40:54.533753 1422398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/ssl/certs/11442312.pem --> /usr/share/ca-certificates/11442312.pem (1708 bytes)
	I1209 05:40:54.552730 1422398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 05:40:54.570021 1422398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/1144231.pem --> /usr/share/ca-certificates/1144231.pem (1338 bytes)
	I1209 05:40:54.587455 1422398 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 05:40:54.600371 1422398 ssh_runner.go:195] Run: openssl version
	I1209 05:40:54.606642 1422398 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/11442312.pem
	I1209 05:40:54.613904 1422398 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/11442312.pem /etc/ssl/certs/11442312.pem
	I1209 05:40:54.621395 1422398 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11442312.pem
	I1209 05:40:54.624932 1422398 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 04:18 /usr/share/ca-certificates/11442312.pem
	I1209 05:40:54.625005 1422398 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11442312.pem
	I1209 05:40:54.665847 1422398 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1209 05:40:54.673355 1422398 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/11442312.pem /etc/ssl/certs/3ec20f2e.0
	I1209 05:40:54.680386 1422398 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1209 05:40:54.687518 1422398 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1209 05:40:54.694760 1422398 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 05:40:54.698200 1422398 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 04:09 /usr/share/ca-certificates/minikubeCA.pem
	I1209 05:40:54.698275 1422398 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 05:40:54.739105 1422398 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1209 05:40:54.746468 1422398 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1209 05:40:54.753754 1422398 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1144231.pem
	I1209 05:40:54.761267 1422398 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1144231.pem /etc/ssl/certs/1144231.pem
	I1209 05:40:54.768631 1422398 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1144231.pem
	I1209 05:40:54.772107 1422398 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 04:18 /usr/share/ca-certificates/1144231.pem
	I1209 05:40:54.772200 1422398 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1144231.pem
	I1209 05:40:54.812987 1422398 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1209 05:40:54.820239 1422398 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/1144231.pem /etc/ssl/certs/51391683.0
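The test -L / ln -fs pairs above implement OpenSSL's hashed CA directory layout: each trusted cert must be reachable through a symlink named <subject-hash>.0 for -CApath style lookups. By hand, for one cert or the whole directory:

    CERT=/usr/share/ca-certificates/minikubeCA.pem

    # Subject-name hash OpenSSL uses for directory lookup (b5213941 in the log).
    HASH=$(openssl x509 -hash -noout -in "$CERT")
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"

    # Bulk equivalent over the whole directory (OpenSSL 1.1.0+).
    sudo openssl rehash /etc/ssl/certs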
	I1209 05:40:54.827466 1422398 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 05:40:54.830847 1422398 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1209 05:40:54.830917 1422398 kubeadm.go:401] StartCluster: {Name:newest-cni-262540 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-262540 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 05:40:54.831012 1422398 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1209 05:40:54.831072 1422398 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 05:40:54.863416 1422398 cri.go:89] found id: ""
	I1209 05:40:54.863486 1422398 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1209 05:40:54.871043 1422398 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 05:40:54.878854 1422398 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1209 05:40:54.878952 1422398 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 05:40:54.886794 1422398 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 05:40:54.886847 1422398 kubeadm.go:158] found existing configuration files:
	
	I1209 05:40:54.886908 1422398 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1209 05:40:54.894435 1422398 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 05:40:54.894550 1422398 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 05:40:54.901704 1422398 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1209 05:40:54.909273 1422398 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 05:40:54.909385 1422398 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 05:40:54.916897 1422398 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1209 05:40:54.924926 1422398 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 05:40:54.925024 1422398 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 05:40:54.932137 1422398 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1209 05:40:54.939823 1422398 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 05:40:54.939911 1422398 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1209 05:40:54.947153 1422398 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1209 05:40:54.985945 1422398 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1209 05:40:54.986006 1422398 kubeadm.go:319] [preflight] Running pre-flight checks
	I1209 05:40:55.098038 1422398 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1209 05:40:55.098124 1422398 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1209 05:40:55.098168 1422398 kubeadm.go:319] OS: Linux
	I1209 05:40:55.098224 1422398 kubeadm.go:319] CGROUPS_CPU: enabled
	I1209 05:40:55.098279 1422398 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1209 05:40:55.098332 1422398 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1209 05:40:55.098392 1422398 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1209 05:40:55.098445 1422398 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1209 05:40:55.098502 1422398 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1209 05:40:55.098554 1422398 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1209 05:40:55.098607 1422398 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1209 05:40:55.098661 1422398 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1209 05:40:55.213327 1422398 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1209 05:40:55.213517 1422398 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1209 05:40:55.213698 1422398 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1209 05:40:55.232400 1422398 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1209 05:40:55.239096 1422398 out.go:252]   - Generating certificates and keys ...
	I1209 05:40:55.239277 1422398 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1209 05:40:55.239377 1422398 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1209 05:40:55.754714 1422398 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1209 05:40:56.183780 1422398 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1209 05:40:56.537089 1422398 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1209 05:40:56.838991 1422398 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1209 05:40:57.144061 1422398 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1209 05:40:57.144319 1422398 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-262540] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1209 05:40:57.237080 1422398 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1209 05:40:57.237305 1422398 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-262540] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1209 05:40:57.410307 1422398 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1209 05:40:57.494105 1422398 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1209 05:40:57.828849 1422398 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1209 05:40:57.829173 1422398 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1209 05:40:58.186047 1422398 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1209 05:40:58.553535 1422398 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1209 05:40:58.846953 1422398 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1209 05:40:59.216978 1422398 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1209 05:40:59.442501 1422398 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1209 05:40:59.443253 1422398 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1209 05:40:59.445958 1422398 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1209 05:40:59.449559 1422398 out.go:252]   - Booting up control plane ...
	I1209 05:40:59.449660 1422398 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1209 05:40:59.449739 1422398 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1209 05:40:59.449809 1422398 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1209 05:40:59.466855 1422398 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1209 05:40:59.467191 1422398 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1209 05:40:59.475169 1422398 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1209 05:40:59.475483 1422398 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1209 05:40:59.475706 1422398 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1209 05:40:59.606469 1422398 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1209 05:40:59.606609 1422398 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1209 05:43:42.940465 1404644 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001103315s
	I1209 05:43:42.940494 1404644 kubeadm.go:319] 
	I1209 05:43:42.940552 1404644 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1209 05:43:42.940585 1404644 kubeadm.go:319] 	- The kubelet is not running
	I1209 05:43:42.940690 1404644 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1209 05:43:42.940694 1404644 kubeadm.go:319] 
	I1209 05:43:42.940799 1404644 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1209 05:43:42.940831 1404644 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1209 05:43:42.940862 1404644 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1209 05:43:42.940866 1404644 kubeadm.go:319] 
	I1209 05:43:42.944449 1404644 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1209 05:43:42.944876 1404644 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1209 05:43:42.944989 1404644 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1209 05:43:42.945227 1404644 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1209 05:43:42.945235 1404644 kubeadm.go:319] 
	I1209 05:43:42.945305 1404644 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
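
The failure here is kubeadm's wait-control-plane phase timing out on the kubelet health endpoint after 4m0s. The same probe and the two suggested triage commands can be run by hand on the node (all three commands come from the messages above; the client timeout and paging flags are illustrative additions):

    # kubeadm's health probe, with a short client timeout added for convenience
    curl -sS --max-time 5 http://127.0.0.1:10248/healthz; echo
    # the two troubleshooting commands the error message suggests
    systemctl status kubelet --no-pager
    journalctl -xeu kubelet --no-pager | tail -n 50
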
	I1209 05:43:42.945358 1404644 kubeadm.go:403] duration metric: took 8m7.791342576s to StartCluster
	I1209 05:43:42.945399 1404644 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:43:42.945466 1404644 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:43:42.969310 1404644 cri.go:89] found id: ""
	I1209 05:43:42.969335 1404644 logs.go:282] 0 containers: []
	W1209 05:43:42.969343 1404644 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:43:42.969349 1404644 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:43:42.969414 1404644 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:43:42.997525 1404644 cri.go:89] found id: ""
	I1209 05:43:42.997547 1404644 logs.go:282] 0 containers: []
	W1209 05:43:42.997556 1404644 logs.go:284] No container was found matching "etcd"
	I1209 05:43:42.997562 1404644 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:43:42.997619 1404644 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:43:43.022335 1404644 cri.go:89] found id: ""
	I1209 05:43:43.022360 1404644 logs.go:282] 0 containers: []
	W1209 05:43:43.022369 1404644 logs.go:284] No container was found matching "coredns"
	I1209 05:43:43.022380 1404644 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:43:43.022440 1404644 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:43:43.046700 1404644 cri.go:89] found id: ""
	I1209 05:43:43.046725 1404644 logs.go:282] 0 containers: []
	W1209 05:43:43.046734 1404644 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:43:43.046739 1404644 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:43:43.046797 1404644 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:43:43.071875 1404644 cri.go:89] found id: ""
	I1209 05:43:43.071906 1404644 logs.go:282] 0 containers: []
	W1209 05:43:43.071915 1404644 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:43:43.071921 1404644 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:43:43.071986 1404644 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:43:43.096153 1404644 cri.go:89] found id: ""
	I1209 05:43:43.096176 1404644 logs.go:282] 0 containers: []
	W1209 05:43:43.096190 1404644 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:43:43.096198 1404644 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:43:43.096259 1404644 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:43:43.121898 1404644 cri.go:89] found id: ""
	I1209 05:43:43.121922 1404644 logs.go:282] 0 containers: []
	W1209 05:43:43.121931 1404644 logs.go:284] No container was found matching "kindnet"
	I1209 05:43:43.121940 1404644 logs.go:123] Gathering logs for containerd ...
	I1209 05:43:43.121951 1404644 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:43:43.163306 1404644 logs.go:123] Gathering logs for container status ...
	I1209 05:43:43.163339 1404644 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:43:43.207532 1404644 logs.go:123] Gathering logs for kubelet ...
	I1209 05:43:43.207567 1404644 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:43:43.277243 1404644 logs.go:123] Gathering logs for dmesg ...
	I1209 05:43:43.277279 1404644 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:43:43.298477 1404644 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:43:43.298507 1404644 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:43:43.365347 1404644 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:43:43.357461    5461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:43:43.358090    5461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:43:43.359781    5461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:43:43.360115    5461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:43:43.361539    5461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:43:43.357461    5461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:43:43.358090    5461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:43:43.359781    5461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:43:43.360115    5461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:43:43.361539    5461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
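
Every "describe nodes" attempt is refused on localhost:8443 because the API server container never started, which is consistent with the empty crictl listings above. The same check, condensed to a single query (command taken from the cri.go lines above; an empty result means no kube-apiserver container was ever created):

    sudo crictl ps -a --quiet --name=kube-apiserver
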
	W1209 05:43:43.365381 1404644 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001103315s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1209 05:43:43.365413 1404644 out.go:285] * 
	W1209 05:43:43.365475 1404644 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001103315s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1209 05:43:43.365493 1404644 out.go:285] * 
	W1209 05:43:43.367868 1404644 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 05:43:43.374302 1404644 out.go:203] 
	W1209 05:43:43.377044 1404644 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001103315s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1209 05:43:43.377091 1404644 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1209 05:43:43.377112 1404644 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1209 05:43:43.380387 1404644 out.go:203] 
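
The suggested workaround, written out as a start invocation (a sketch only: <profile> is a placeholder, the driver and runtime flags are the ones this log shows in use, and the --extra-config flag is copied from the suggestion above; whether it helps depends on the host's cgroup setup):

    minikube start -p <profile> --driver=docker --container-runtime=containerd \
      --extra-config=kubelet.cgroup-driver=systemd
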
	I1209 05:44:59.607480 1422398 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.00121326s
	I1209 05:44:59.607519 1422398 kubeadm.go:319] 
	I1209 05:44:59.607618 1422398 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1209 05:44:59.607726 1422398 kubeadm.go:319] 	- The kubelet is not running
	I1209 05:44:59.607978 1422398 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1209 05:44:59.607986 1422398 kubeadm.go:319] 
	I1209 05:44:59.608454 1422398 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1209 05:44:59.608521 1422398 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1209 05:44:59.608577 1422398 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1209 05:44:59.608582 1422398 kubeadm.go:319] 
	I1209 05:44:59.613231 1422398 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1209 05:44:59.613828 1422398 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1209 05:44:59.613984 1422398 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1209 05:44:59.614238 1422398 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1209 05:44:59.614252 1422398 kubeadm.go:319] 
	I1209 05:44:59.614382 1422398 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1209 05:44:59.614451 1422398 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-262540] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-262540] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00121326s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1209 05:44:59.614533 1422398 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1209 05:45:00.103679 1422398 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 05:45:00.173261 1422398 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1209 05:45:00.173416 1422398 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 05:45:00.208469 1422398 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 05:45:00.208544 1422398 kubeadm.go:158] found existing configuration files:
	
	I1209 05:45:00.208645 1422398 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1209 05:45:00.241859 1422398 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 05:45:00.241972 1422398 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 05:45:00.286462 1422398 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1209 05:45:00.323140 1422398 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 05:45:00.323227 1422398 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 05:45:00.375275 1422398 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1209 05:45:00.422213 1422398 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 05:45:00.422297 1422398 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 05:45:00.482732 1422398 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1209 05:45:00.551039 1422398 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 05:45:00.551196 1422398 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1209 05:45:00.603184 1422398 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1209 05:45:00.737020 1422398 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1209 05:45:00.737088 1422398 kubeadm.go:319] [preflight] Running pre-flight checks
	I1209 05:45:00.854575 1422398 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1209 05:45:00.854658 1422398 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1209 05:45:00.854699 1422398 kubeadm.go:319] OS: Linux
	I1209 05:45:00.854747 1422398 kubeadm.go:319] CGROUPS_CPU: enabled
	I1209 05:45:00.854798 1422398 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1209 05:45:00.854848 1422398 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1209 05:45:00.854898 1422398 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1209 05:45:00.854948 1422398 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1209 05:45:00.854997 1422398 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1209 05:45:00.855044 1422398 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1209 05:45:00.855095 1422398 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1209 05:45:00.855143 1422398 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1209 05:45:00.931863 1422398 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1209 05:45:00.931972 1422398 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1209 05:45:00.932087 1422398 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1209 05:45:00.939118 1422398 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1209 05:45:00.942776 1422398 out.go:252]   - Generating certificates and keys ...
	I1209 05:45:00.942945 1422398 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1209 05:45:00.943041 1422398 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1209 05:45:00.943160 1422398 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1209 05:45:00.943259 1422398 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1209 05:45:00.943342 1422398 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1209 05:45:00.943403 1422398 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1209 05:45:00.943494 1422398 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1209 05:45:00.943590 1422398 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1209 05:45:00.943707 1422398 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1209 05:45:00.943793 1422398 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1209 05:45:00.944009 1422398 kubeadm.go:319] [certs] Using the existing "sa" key
	I1209 05:45:00.944161 1422398 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1209 05:45:01.208491 1422398 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1209 05:45:01.530404 1422398 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1209 05:45:01.608144 1422398 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1209 05:45:02.097879 1422398 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1209 05:45:02.557838 1422398 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1209 05:45:02.558503 1422398 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1209 05:45:02.561184 1422398 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1209 05:45:02.564240 1422398 out.go:252]   - Booting up control plane ...
	I1209 05:45:02.564359 1422398 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1209 05:45:02.564446 1422398 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1209 05:45:02.566129 1422398 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1209 05:45:02.587668 1422398 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1209 05:45:02.588156 1422398 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1209 05:45:02.596446 1422398 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1209 05:45:02.596549 1422398 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1209 05:45:02.596595 1422398 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1209 05:45:02.726040 1422398 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1209 05:45:02.726160 1422398 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 09 05:35:22 no-preload-842269 containerd[758]: time="2025-12-09T05:35:22.036308692Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-scheduler:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 09 05:35:23 no-preload-842269 containerd[758]: time="2025-12-09T05:35:23.816614233Z" level=info msg="No images store for sha256:eb9020767c0d3bbd754f3f52cbe4c8bdd935dd5862604d6dc0b1f10422189544"
	Dec 09 05:35:23 no-preload-842269 containerd[758]: time="2025-12-09T05:35:23.819781105Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.0-beta.0\""
	Dec 09 05:35:23 no-preload-842269 containerd[758]: time="2025-12-09T05:35:23.827929881Z" level=info msg="ImageCreate event name:\"sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 09 05:35:23 no-preload-842269 containerd[758]: time="2025-12-09T05:35:23.828477017Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-proxy:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 09 05:35:25 no-preload-842269 containerd[758]: time="2025-12-09T05:35:25.333353027Z" level=info msg="No images store for sha256:5ed8f231f07481c657ad0e1d039921948e7abbc30ef6215465129012c4c4a508"
	Dec 09 05:35:25 no-preload-842269 containerd[758]: time="2025-12-09T05:35:25.336577825Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-beta.0\""
	Dec 09 05:35:25 no-preload-842269 containerd[758]: time="2025-12-09T05:35:25.345148784Z" level=info msg="ImageCreate event name:\"sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 09 05:35:25 no-preload-842269 containerd[758]: time="2025-12-09T05:35:25.345908041Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 09 05:35:27 no-preload-842269 containerd[758]: time="2025-12-09T05:35:27.120561243Z" level=info msg="No images store for sha256:64f3fb0a3392f487dbd4300c920f76dc3de2961e11fd6bfbedc75c0d25b1954c"
	Dec 09 05:35:27 no-preload-842269 containerd[758]: time="2025-12-09T05:35:27.122833000Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-beta.0\""
	Dec 09 05:35:27 no-preload-842269 containerd[758]: time="2025-12-09T05:35:27.131014710Z" level=info msg="ImageCreate event name:\"sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 09 05:35:27 no-preload-842269 containerd[758]: time="2025-12-09T05:35:27.132076917Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 09 05:35:28 no-preload-842269 containerd[758]: time="2025-12-09T05:35:28.575071692Z" level=info msg="No images store for sha256:5e4a4fe83792bf529a4e283e09069cf50cc9882d04168a33903ed6809a492e61"
	Dec 09 05:35:28 no-preload-842269 containerd[758]: time="2025-12-09T05:35:28.577744678Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\""
	Dec 09 05:35:28 no-preload-842269 containerd[758]: time="2025-12-09T05:35:28.588706439Z" level=info msg="ImageCreate event name:\"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 09 05:35:28 no-preload-842269 containerd[758]: time="2025-12-09T05:35:28.589547631Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 09 05:35:30 no-preload-842269 containerd[758]: time="2025-12-09T05:35:30.380855874Z" level=info msg="No images store for sha256:89a52ae86f116708cd5ba0d54dfbf2ae3011f126ee9161c4afb19bf2a51ef285"
	Dec 09 05:35:30 no-preload-842269 containerd[758]: time="2025-12-09T05:35:30.384556393Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\""
	Dec 09 05:35:30 no-preload-842269 containerd[758]: time="2025-12-09T05:35:30.398357527Z" level=info msg="ImageCreate event name:\"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 09 05:35:30 no-preload-842269 containerd[758]: time="2025-12-09T05:35:30.402958452Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 09 05:35:30 no-preload-842269 containerd[758]: time="2025-12-09T05:35:30.951034968Z" level=info msg="No images store for sha256:7475c7d18769df89a804d5bebf679dbf94886f3626f07a2be923beaa0cc7e5b0"
	Dec 09 05:35:30 no-preload-842269 containerd[758]: time="2025-12-09T05:35:30.953249078Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\""
	Dec 09 05:35:30 no-preload-842269 containerd[758]: time="2025-12-09T05:35:30.967195965Z" level=info msg="ImageCreate event name:\"sha256:66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 09 05:35:30 no-preload-842269 containerd[758]: time="2025-12-09T05:35:30.967501105Z" level=info msg="ImageUpdate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:45:17.285731    6758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:45:17.286151    6758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:45:17.287780    6758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:45:17.288446    6758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:45:17.289962    6758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +25.904254] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:14] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:16] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:18] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:19] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:30] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:32] overlayfs: idmapped layers are currently not supported
	[ +28.114653] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:33] overlayfs: idmapped layers are currently not supported
	[ +23.720849] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:34] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:35] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:36] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:37] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:38] overlayfs: idmapped layers are currently not supported
	[ +23.656275] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:39] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:57] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:58] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:00] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:02] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:03] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:05] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:06] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 9 05:31] overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	
	
	==> kernel <==
	 05:45:17 up  8:27,  0 user,  load average: 1.27, 1.29, 1.74
	Linux no-preload-842269 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 09 05:45:14 no-preload-842269 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 09 05:45:14 no-preload-842269 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 442.
	Dec 09 05:45:14 no-preload-842269 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 05:45:14 no-preload-842269 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 05:45:14 no-preload-842269 kubelet[6636]: E1209 05:45:14.975653    6636 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 09 05:45:14 no-preload-842269 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 09 05:45:14 no-preload-842269 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 09 05:45:15 no-preload-842269 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 443.
	Dec 09 05:45:15 no-preload-842269 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 05:45:15 no-preload-842269 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 05:45:15 no-preload-842269 kubelet[6642]: E1209 05:45:15.723309    6642 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 09 05:45:15 no-preload-842269 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 09 05:45:15 no-preload-842269 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 09 05:45:16 no-preload-842269 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 444.
	Dec 09 05:45:16 no-preload-842269 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 05:45:16 no-preload-842269 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 05:45:16 no-preload-842269 kubelet[6653]: E1209 05:45:16.504423    6653 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 09 05:45:16 no-preload-842269 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 09 05:45:16 no-preload-842269 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 09 05:45:17 no-preload-842269 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 445.
	Dec 09 05:45:17 no-preload-842269 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 05:45:17 no-preload-842269 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 05:45:17 no-preload-842269 kubelet[6750]: E1209 05:45:17.244852    6750 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 09 05:45:17 no-preload-842269 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 09 05:45:17 no-preload-842269 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
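The dump above is minikube's standard post-mortem collection for a failed test. A rough manual equivalent for local triage is sketched below, assuming the profile name from this run and stock minikube and kubectl commands:

	# Minimal sketch: regenerate the key sections of the dump against a live profile.
	PROFILE=no-preload-842269
	kubectl describe nodes                                                        # "describe nodes" section
	minikube ssh -p "$PROFILE" -- 'dmesg | tail -n 25'                            # "dmesg" section
	minikube ssh -p "$PROFILE" -- 'sudo journalctl -u kubelet -n 30 --no-pager'   # "kubelet" section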
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-842269 -n no-preload-842269
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-842269 -n no-preload-842269: exit status 6 (356.083171ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1209 05:45:17.756720 1429564 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-842269" does not appear in /home/jenkins/minikube-integration/22081-1142328/kubeconfig

** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "no-preload-842269" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (89.69s)
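Every kubelet restart captured above dies on the same validation error: the v1.35.0-beta.0 kubelet refuses to start on a host still running cgroup v1, so the apiserver never comes up and the dependent no-preload tests fail with it. A quick way to confirm which cgroup hierarchy the node container actually sees is sketched below with standard commands (the failCgroupV1 kubelet-config field is an assumption inferred from the error text, not taken from this log):

	# Minimal diagnostic sketch: check the node's cgroup version.
	# "cgroup2fs" indicates cgroup v2; "tmpfs" indicates a cgroup v1 hierarchy root.
	minikube ssh -p no-preload-842269 -- stat -fc %T /sys/fs/cgroup/
	# Look for the gate behind the error message (field name assumed):
	minikube ssh -p no-preload-842269 -- sudo grep -i failcgroupv1 /var/lib/kubelet/config.yaml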

x
+
TestStartStop/group/no-preload/serial/SecondStart (370.05s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-842269 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
E1209 05:45:31.235846 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/default-k8s-diff-port-564611/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 05:45:32.751079 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 05:45:42.067464 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-717497/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 05:46:06.731701 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/addons-221952/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 05:46:53.157991 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/default-k8s-diff-port-564611/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 05:47:38.985972 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-717497/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 05:48:26.932823 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/old-k8s-version-384009/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p no-preload-842269 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: exit status 80 (6m8.449320053s)

-- stdout --
	* [no-preload-842269] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22081
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22081-1142328/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-1142328/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "no-preload-842269" primary control-plane node in "no-preload-842269" cluster
	* Pulling base image v0.0.48-1765184860-22066 ...
	* Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image registry.k8s.io/echoserver:1.4
	* Enabled addons: 
	
	

-- /stdout --
** stderr ** 
	I1209 05:45:19.304985 1429857 out.go:360] Setting OutFile to fd 1 ...
	I1209 05:45:19.305094 1429857 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 05:45:19.305101 1429857 out.go:374] Setting ErrFile to fd 2...
	I1209 05:45:19.305106 1429857 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 05:45:19.305469 1429857 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-1142328/.minikube/bin
	I1209 05:45:19.305897 1429857 out.go:368] Setting JSON to false
	I1209 05:45:19.307371 1429857 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":30443,"bootTime":1765228677,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1209 05:45:19.307474 1429857 start.go:143] virtualization:  
	I1209 05:45:19.312362 1429857 out.go:179] * [no-preload-842269] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1209 05:45:19.315432 1429857 out.go:179]   - MINIKUBE_LOCATION=22081
	I1209 05:45:19.315644 1429857 notify.go:221] Checking for updates...
	I1209 05:45:19.321156 1429857 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 05:45:19.324049 1429857 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22081-1142328/kubeconfig
	I1209 05:45:19.326954 1429857 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-1142328/.minikube
	I1209 05:45:19.329810 1429857 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1209 05:45:19.332669 1429857 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 05:45:19.336051 1429857 config.go:182] Loaded profile config "no-preload-842269": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1209 05:45:19.336708 1429857 driver.go:422] Setting default libvirt URI to qemu:///system
	I1209 05:45:19.364223 1429857 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1209 05:45:19.364347 1429857 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 05:45:19.423199 1429857 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-09 05:45:19.414226912 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1209 05:45:19.423304 1429857 docker.go:319] overlay module found
	I1209 05:45:19.426467 1429857 out.go:179] * Using the docker driver based on existing profile
	I1209 05:45:19.429450 1429857 start.go:309] selected driver: docker
	I1209 05:45:19.429469 1429857 start.go:927] validating driver "docker" against &{Name:no-preload-842269 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-842269 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 05:45:19.429573 1429857 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 05:45:19.430271 1429857 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 05:45:19.484934 1429857 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-09 05:45:19.476108747 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1209 05:45:19.485260 1429857 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 05:45:19.485294 1429857 cni.go:84] Creating CNI manager for ""
	I1209 05:45:19.485352 1429857 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1209 05:45:19.485394 1429857 start.go:353] cluster config:
	{Name:no-preload-842269 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-842269 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 05:45:19.488591 1429857 out.go:179] * Starting "no-preload-842269" primary control-plane node in "no-preload-842269" cluster
	I1209 05:45:19.491427 1429857 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1209 05:45:19.494310 1429857 out.go:179] * Pulling base image v0.0.48-1765184860-22066 ...
	I1209 05:45:19.497153 1429857 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1209 05:45:19.497221 1429857 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local docker daemon
	I1209 05:45:19.497291 1429857 profile.go:143] Saving config to /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/no-preload-842269/config.json ...
	I1209 05:45:19.497571 1429857 cache.go:107] acquiring lock: {Name:mkf65d4ffaf3daf987b7ba0301a9962f00106981 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 05:45:19.497666 1429857 cache.go:115] /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1209 05:45:19.497678 1429857 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22081-1142328/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 116.666µs
	I1209 05:45:19.497690 1429857 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1209 05:45:19.497702 1429857 cache.go:107] acquiring lock: {Name:mk4d0c4ab95f11691dbecfbd7b2c72b3028abf9f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 05:45:19.497735 1429857 cache.go:115] /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1209 05:45:19.497745 1429857 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22081-1142328/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 45.152µs
	I1209 05:45:19.497752 1429857 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1209 05:45:19.497766 1429857 cache.go:107] acquiring lock: {Name:mk7cb8e420e05ffddcb417dedf3ddace46afcf1b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 05:45:19.497807 1429857 cache.go:115] /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1209 05:45:19.497815 1429857 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22081-1142328/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 50.033µs
	I1209 05:45:19.497822 1429857 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1209 05:45:19.497835 1429857 cache.go:107] acquiring lock: {Name:mka2eb1b7c29ae7ae604d5f65c47b25198cfb45b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 05:45:19.497867 1429857 cache.go:115] /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1209 05:45:19.497876 1429857 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22081-1142328/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 42.009µs
	I1209 05:45:19.497883 1429857 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1209 05:45:19.497892 1429857 cache.go:107] acquiring lock: {Name:mkade1779cb2ecc1c54a36bd1719bf2ef87bdf51 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 05:45:19.497922 1429857 cache.go:115] /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1209 05:45:19.497931 1429857 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22081-1142328/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 40.704µs
	I1209 05:45:19.497942 1429857 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1209 05:45:19.497955 1429857 cache.go:107] acquiring lock: {Name:mk604b76e7428f7b39bf507a7086fea810617cc7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 05:45:19.497987 1429857 cache.go:115] /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1209 05:45:19.497996 1429857 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22081-1142328/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 42.46µs
	I1209 05:45:19.498002 1429857 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1209 05:45:19.498011 1429857 cache.go:107] acquiring lock: {Name:mk605cb0bdcc667f1a6cc01dc2d318b41822c88f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 05:45:19.498037 1429857 cache.go:115] /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1209 05:45:19.498046 1429857 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/22081-1142328/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 36.306µs
	I1209 05:45:19.498052 1429857 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1209 05:45:19.498060 1429857 cache.go:107] acquiring lock: {Name:mk288542758fec96b5cb8ac3de75700c31bfbfc0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 05:45:19.498089 1429857 cache.go:115] /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1209 05:45:19.498098 1429857 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22081-1142328/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 38.916µs
	I1209 05:45:19.498104 1429857 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1209 05:45:19.498110 1429857 cache.go:87] Successfully saved all images to host disk.
	I1209 05:45:19.517152 1429857 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local docker daemon, skipping pull
	I1209 05:45:19.517175 1429857 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c exists in daemon, skipping load
	I1209 05:45:19.517194 1429857 cache.go:243] Successfully downloaded all kic artifacts
	I1209 05:45:19.517225 1429857 start.go:360] acquireMachinesLock for no-preload-842269: {Name:mk19b7be61094a19b29603fb95f6d7b282529614 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 05:45:19.517288 1429857 start.go:364] duration metric: took 43.707µs to acquireMachinesLock for "no-preload-842269"
	I1209 05:45:19.517311 1429857 start.go:96] Skipping create...Using existing machine configuration
	I1209 05:45:19.517320 1429857 fix.go:54] fixHost starting: 
	I1209 05:45:19.517582 1429857 cli_runner.go:164] Run: docker container inspect no-preload-842269 --format={{.State.Status}}
	I1209 05:45:19.535058 1429857 fix.go:112] recreateIfNeeded on no-preload-842269: state=Stopped err=<nil>
	W1209 05:45:19.535086 1429857 fix.go:138] unexpected machine state, will restart: <nil>
	I1209 05:45:19.538423 1429857 out.go:252] * Restarting existing docker container for "no-preload-842269" ...
	I1209 05:45:19.538508 1429857 cli_runner.go:164] Run: docker start no-preload-842269
	I1209 05:45:19.801093 1429857 cli_runner.go:164] Run: docker container inspect no-preload-842269 --format={{.State.Status}}
	I1209 05:45:19.824109 1429857 kic.go:430] container "no-preload-842269" state is running.
	I1209 05:45:19.824800 1429857 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-842269
	I1209 05:45:19.850927 1429857 profile.go:143] Saving config to /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/no-preload-842269/config.json ...
	I1209 05:45:19.851169 1429857 machine.go:94] provisionDockerMachine start ...
	I1209 05:45:19.851233 1429857 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-842269
	I1209 05:45:19.872359 1429857 main.go:143] libmachine: Using SSH client type: native
	I1209 05:45:19.872683 1429857 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 34210 <nil> <nil>}
	I1209 05:45:19.872698 1429857 main.go:143] libmachine: About to run SSH command:
	hostname
	I1209 05:45:19.873510 1429857 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1209 05:45:23.031698 1429857 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-842269
	
	I1209 05:45:23.031723 1429857 ubuntu.go:182] provisioning hostname "no-preload-842269"
	I1209 05:45:23.031788 1429857 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-842269
	I1209 05:45:23.049528 1429857 main.go:143] libmachine: Using SSH client type: native
	I1209 05:45:23.049842 1429857 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 34210 <nil> <nil>}
	I1209 05:45:23.049866 1429857 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-842269 && echo "no-preload-842269" | sudo tee /etc/hostname
	I1209 05:45:23.212560 1429857 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-842269
	
	I1209 05:45:23.212638 1429857 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-842269
	I1209 05:45:23.230939 1429857 main.go:143] libmachine: Using SSH client type: native
	I1209 05:45:23.231248 1429857 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 34210 <nil> <nil>}
	I1209 05:45:23.231264 1429857 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-842269' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-842269/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-842269' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 05:45:23.384444 1429857 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1209 05:45:23.384483 1429857 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22081-1142328/.minikube CaCertPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22081-1142328/.minikube}
	I1209 05:45:23.384506 1429857 ubuntu.go:190] setting up certificates
	I1209 05:45:23.384523 1429857 provision.go:84] configureAuth start
	I1209 05:45:23.384590 1429857 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-842269
	I1209 05:45:23.401432 1429857 provision.go:143] copyHostCerts
	I1209 05:45:23.401503 1429857 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.pem, removing ...
	I1209 05:45:23.401518 1429857 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.pem
	I1209 05:45:23.401593 1429857 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.pem (1078 bytes)
	I1209 05:45:23.401705 1429857 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-1142328/.minikube/cert.pem, removing ...
	I1209 05:45:23.401714 1429857 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-1142328/.minikube/cert.pem
	I1209 05:45:23.401742 1429857 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22081-1142328/.minikube/cert.pem (1123 bytes)
	I1209 05:45:23.401834 1429857 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-1142328/.minikube/key.pem, removing ...
	I1209 05:45:23.401844 1429857 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-1142328/.minikube/key.pem
	I1209 05:45:23.401870 1429857 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22081-1142328/.minikube/key.pem (1675 bytes)
	I1209 05:45:23.401918 1429857 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca-key.pem org=jenkins.no-preload-842269 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-842269]
	I1209 05:45:24.117829 1429857 provision.go:177] copyRemoteCerts
	I1209 05:45:24.117899 1429857 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 05:45:24.117948 1429857 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-842269
	I1209 05:45:24.136847 1429857 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34210 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/no-preload-842269/id_rsa Username:docker}
	I1209 05:45:24.243917 1429857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1209 05:45:24.261228 1429857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1209 05:45:24.278688 1429857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1209 05:45:24.295602 1429857 provision.go:87] duration metric: took 911.052498ms to configureAuth
	I1209 05:45:24.295630 1429857 ubuntu.go:206] setting minikube options for container-runtime
	I1209 05:45:24.295821 1429857 config.go:182] Loaded profile config "no-preload-842269": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1209 05:45:24.295834 1429857 machine.go:97] duration metric: took 4.444658101s to provisionDockerMachine
	I1209 05:45:24.295843 1429857 start.go:293] postStartSetup for "no-preload-842269" (driver="docker")
	I1209 05:45:24.295853 1429857 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 05:45:24.295939 1429857 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 05:45:24.295989 1429857 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-842269
	I1209 05:45:24.313358 1429857 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34210 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/no-preload-842269/id_rsa Username:docker}
	I1209 05:45:24.419729 1429857 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 05:45:24.423044 1429857 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1209 05:45:24.423074 1429857 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1209 05:45:24.423102 1429857 filesync.go:126] Scanning /home/jenkins/minikube-integration/22081-1142328/.minikube/addons for local assets ...
	I1209 05:45:24.423160 1429857 filesync.go:126] Scanning /home/jenkins/minikube-integration/22081-1142328/.minikube/files for local assets ...
	I1209 05:45:24.423286 1429857 filesync.go:149] local asset: /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/ssl/certs/11442312.pem -> 11442312.pem in /etc/ssl/certs
	I1209 05:45:24.423403 1429857 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 05:45:24.430577 1429857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/ssl/certs/11442312.pem --> /etc/ssl/certs/11442312.pem (1708 bytes)
	I1209 05:45:24.448642 1429857 start.go:296] duration metric: took 152.783704ms for postStartSetup
	I1209 05:45:24.448752 1429857 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1209 05:45:24.448804 1429857 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-842269
	I1209 05:45:24.475577 1429857 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34210 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/no-preload-842269/id_rsa Username:docker}
	I1209 05:45:24.577211 1429857 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1209 05:45:24.581897 1429857 fix.go:56] duration metric: took 5.064569479s for fixHost
	I1209 05:45:24.581929 1429857 start.go:83] releasing machines lock for "no-preload-842269", held for 5.064623763s
	I1209 05:45:24.582003 1429857 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-842269
	I1209 05:45:24.598849 1429857 ssh_runner.go:195] Run: cat /version.json
	I1209 05:45:24.598910 1429857 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-842269
	I1209 05:45:24.599176 1429857 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 05:45:24.599236 1429857 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-842269
	I1209 05:45:24.617491 1429857 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34210 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/no-preload-842269/id_rsa Username:docker}
	I1209 05:45:24.625861 1429857 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34210 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/no-preload-842269/id_rsa Username:docker}
	I1209 05:45:24.719702 1429857 ssh_runner.go:195] Run: systemctl --version
	I1209 05:45:24.811867 1429857 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 05:45:24.816351 1429857 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 05:45:24.816436 1429857 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 05:45:24.824370 1429857 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1209 05:45:24.824393 1429857 start.go:496] detecting cgroup driver to use...
	I1209 05:45:24.824424 1429857 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1209 05:45:24.824478 1429857 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1209 05:45:24.842259 1429857 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1209 05:45:24.856877 1429857 docker.go:218] disabling cri-docker service (if available) ...
	I1209 05:45:24.856943 1429857 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 05:45:24.872872 1429857 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 05:45:24.886154 1429857 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 05:45:24.999208 1429857 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 05:45:25.121326 1429857 docker.go:234] disabling docker service ...
	I1209 05:45:25.121413 1429857 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 05:45:25.137073 1429857 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 05:45:25.150656 1429857 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 05:45:25.286510 1429857 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 05:45:25.394076 1429857 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 05:45:25.406549 1429857 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 05:45:25.420965 1429857 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1209 05:45:25.429321 1429857 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1209 05:45:25.437986 1429857 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1209 05:45:25.438077 1429857 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1209 05:45:25.447132 1429857 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1209 05:45:25.456037 1429857 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1209 05:45:25.464470 1429857 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1209 05:45:25.472760 1429857 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 05:45:25.480756 1429857 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1209 05:45:25.489194 1429857 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1209 05:45:25.497557 1429857 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1209 05:45:25.506153 1429857 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 05:45:25.513357 1429857 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 05:45:25.520101 1429857 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 05:45:25.626477 1429857 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1209 05:45:25.729432 1429857 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1209 05:45:25.729500 1429857 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1209 05:45:25.733824 1429857 start.go:564] Will wait 60s for crictl version
	I1209 05:45:25.733937 1429857 ssh_runner.go:195] Run: which crictl
	I1209 05:45:25.738223 1429857 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1209 05:45:25.764110 1429857 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1209 05:45:25.764179 1429857 ssh_runner.go:195] Run: containerd --version
	I1209 05:45:25.784097 1429857 ssh_runner.go:195] Run: containerd --version
	I1209 05:45:25.809525 1429857 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1209 05:45:25.812650 1429857 cli_runner.go:164] Run: docker network inspect no-preload-842269 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1209 05:45:25.828380 1429857 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1209 05:45:25.832220 1429857 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 05:45:25.842204 1429857 kubeadm.go:884] updating cluster {Name:no-preload-842269 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-842269 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 05:45:25.842335 1429857 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1209 05:45:25.842398 1429857 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 05:45:25.869412 1429857 containerd.go:627] all images are preloaded for containerd runtime.
	I1209 05:45:25.869438 1429857 cache_images.go:86] Images are preloaded, skipping loading
	I1209 05:45:25.869445 1429857 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1209 05:45:25.869544 1429857 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-842269 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-842269 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1209 05:45:25.869609 1429857 ssh_runner.go:195] Run: sudo crictl info
	I1209 05:45:25.894672 1429857 cni.go:84] Creating CNI manager for ""
	I1209 05:45:25.894698 1429857 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1209 05:45:25.894720 1429857 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1209 05:45:25.894751 1429857 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-842269 NodeName:no-preload-842269 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1209 05:45:25.894907 1429857 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-842269"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1209 05:45:25.894981 1429857 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1209 05:45:25.902766 1429857 binaries.go:51] Found k8s binaries, skipping transfer
	I1209 05:45:25.902838 1429857 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1209 05:45:25.910455 1429857 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1209 05:45:25.923076 1429857 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1209 05:45:25.937650 1429857 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
	I1209 05:45:25.951420 1429857 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1209 05:45:25.955331 1429857 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 05:45:25.964795 1429857 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 05:45:26.082166 1429857 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 05:45:26.100679 1429857 certs.go:69] Setting up /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/no-preload-842269 for IP: 192.168.85.2
	I1209 05:45:26.100745 1429857 certs.go:195] generating shared ca certs ...
	I1209 05:45:26.100786 1429857 certs.go:227] acquiring lock for ca certs: {Name:mk15788702f8c4e23b5aeab3f44961d296fab259 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 05:45:26.100943 1429857 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.key
	I1209 05:45:26.101025 1429857 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/proxy-client-ca.key
	I1209 05:45:26.101056 1429857 certs.go:257] generating profile certs ...
	I1209 05:45:26.101186 1429857 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/no-preload-842269/client.key
	I1209 05:45:26.101295 1429857 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/no-preload-842269/apiserver.key.135a6aab
	I1209 05:45:26.101368 1429857 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/no-preload-842269/proxy-client.key
	I1209 05:45:26.101513 1429857 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/1144231.pem (1338 bytes)
	W1209 05:45:26.101579 1429857 certs.go:480] ignoring /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/1144231_empty.pem, impossibly tiny 0 bytes
	I1209 05:45:26.101605 1429857 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca-key.pem (1679 bytes)
	I1209 05:45:26.101652 1429857 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem (1078 bytes)
	I1209 05:45:26.101704 1429857 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/cert.pem (1123 bytes)
	I1209 05:45:26.101777 1429857 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/key.pem (1675 bytes)
	I1209 05:45:26.101861 1429857 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/ssl/certs/11442312.pem (1708 bytes)
	I1209 05:45:26.102562 1429857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 05:45:26.122800 1429857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1209 05:45:26.142042 1429857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 05:45:26.161502 1429857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1209 05:45:26.179586 1429857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/no-preload-842269/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1209 05:45:26.196698 1429857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/no-preload-842269/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1209 05:45:26.212945 1429857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/no-preload-842269/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 05:45:26.230416 1429857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/no-preload-842269/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1209 05:45:26.247147 1429857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/ssl/certs/11442312.pem --> /usr/share/ca-certificates/11442312.pem (1708 bytes)
	I1209 05:45:26.265734 1429857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 05:45:26.282961 1429857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/1144231.pem --> /usr/share/ca-certificates/1144231.pem (1338 bytes)
	I1209 05:45:26.300125 1429857 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 05:45:26.312156 1429857 ssh_runner.go:195] Run: openssl version
	I1209 05:45:26.318566 1429857 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/11442312.pem
	I1209 05:45:26.329117 1429857 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/11442312.pem /etc/ssl/certs/11442312.pem
	I1209 05:45:26.336403 1429857 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11442312.pem
	I1209 05:45:26.340126 1429857 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 04:18 /usr/share/ca-certificates/11442312.pem
	I1209 05:45:26.340197 1429857 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11442312.pem
	I1209 05:45:26.383366 1429857 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1209 05:45:26.390871 1429857 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1209 05:45:26.398106 1429857 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1209 05:45:26.405814 1429857 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 05:45:26.409683 1429857 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 04:09 /usr/share/ca-certificates/minikubeCA.pem
	I1209 05:45:26.409750 1429857 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 05:45:26.450573 1429857 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1209 05:45:26.458322 1429857 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1144231.pem
	I1209 05:45:26.465833 1429857 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1144231.pem /etc/ssl/certs/1144231.pem
	I1209 05:45:26.473482 1429857 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1144231.pem
	I1209 05:45:26.477501 1429857 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 04:18 /usr/share/ca-certificates/1144231.pem
	I1209 05:45:26.477569 1429857 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1144231.pem
	I1209 05:45:26.518776 1429857 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
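The openssl/ln/test sequences above follow OpenSSL's c_rehash convention: each CA file is exposed under /etc/ssl/certs as <subject-hash>.0, which is what the 3ec20f2e.0, b5213941.0 and 51391683.0 probes verify. A minimal sketch of the same pattern with a hypothetical certificate path:

    H=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/example.pem)   # e.g. prints 3ec20f2e
    sudo ln -fs /usr/share/ca-certificates/example.pem "/etc/ssl/certs/${H}.0"
    sudo test -L "/etc/ssl/certs/${H}.0" && echo "hash link present"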
	I1209 05:45:26.526248 1429857 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 05:45:26.529980 1429857 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1209 05:45:26.572441 1429857 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1209 05:45:26.613785 1429857 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1209 05:45:26.655322 1429857 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1209 05:45:26.696546 1429857 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1209 05:45:26.739135 1429857 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
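The six -checkend 86400 runs above are 24-hour expiry probes: openssl exits non-zero if the certificate would expire within the given number of seconds. A sketch reusing one of the paths from this log:

    # exits non-zero (and prints "Certificate will expire") if expiry is within 86400s = 24h
    if ! openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
      echo "certificate expires within 24h; regeneration needed" >&2
    fi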
	I1209 05:45:26.780278 1429857 kubeadm.go:401] StartCluster: {Name:no-preload-842269 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-842269 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 05:45:26.780376 1429857 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1209 05:45:26.780450 1429857 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 05:45:26.805821 1429857 cri.go:89] found id: ""
	I1209 05:45:26.805924 1429857 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1209 05:45:26.813920 1429857 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1209 05:45:26.813941 1429857 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1209 05:45:26.814022 1429857 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1209 05:45:26.821515 1429857 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1209 05:45:26.821952 1429857 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-842269" does not appear in /home/jenkins/minikube-integration/22081-1142328/kubeconfig
	I1209 05:45:26.822061 1429857 kubeconfig.go:62] /home/jenkins/minikube-integration/22081-1142328/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-842269" cluster setting kubeconfig missing "no-preload-842269" context setting]
	I1209 05:45:26.822332 1429857 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1142328/kubeconfig: {Name:mk0dc127429f88dc4fdfb2d110deebc58207c1b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
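The repair minikube performs here amounts to re-adding the missing cluster and context entries to the kubeconfig. Done by hand it would look roughly like this (a hypothetical sketch using this profile's endpoint; the CA path is assumed, not taken from the log):

    kubectl config set-cluster no-preload-842269 \
      --server=https://192.168.85.2:8443 \
      --certificate-authority="$HOME/.minikube/ca.crt"
    kubectl config set-context no-preload-842269 \
      --cluster=no-preload-842269 --user=no-preload-842269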
	I1209 05:45:26.823581 1429857 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1209 05:45:26.832049 1429857 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1209 05:45:26.832081 1429857 kubeadm.go:602] duration metric: took 18.134254ms to restartPrimaryControlPlane
	I1209 05:45:26.832090 1429857 kubeadm.go:403] duration metric: took 51.823986ms to StartCluster
	I1209 05:45:26.832105 1429857 settings.go:142] acquiring lock: {Name:mk8fa744e3d74bf8a1cbf5ac275c9f1969ad91a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 05:45:26.832161 1429857 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22081-1142328/kubeconfig
	I1209 05:45:26.832776 1429857 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1142328/kubeconfig: {Name:mk0dc127429f88dc4fdfb2d110deebc58207c1b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 05:45:26.832985 1429857 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1209 05:45:26.833266 1429857 config.go:182] Loaded profile config "no-preload-842269": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1209 05:45:26.833313 1429857 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1209 05:45:26.833376 1429857 addons.go:70] Setting storage-provisioner=true in profile "no-preload-842269"
	I1209 05:45:26.833395 1429857 addons.go:239] Setting addon storage-provisioner=true in "no-preload-842269"
	I1209 05:45:26.833419 1429857 host.go:66] Checking if "no-preload-842269" exists ...
	I1209 05:45:26.833892 1429857 cli_runner.go:164] Run: docker container inspect no-preload-842269 --format={{.State.Status}}
	I1209 05:45:26.834299 1429857 addons.go:70] Setting default-storageclass=true in profile "no-preload-842269"
	I1209 05:45:26.834336 1429857 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-842269"
	I1209 05:45:26.834606 1429857 cli_runner.go:164] Run: docker container inspect no-preload-842269 --format={{.State.Status}}
	I1209 05:45:26.836968 1429857 addons.go:70] Setting dashboard=true in profile "no-preload-842269"
	I1209 05:45:26.837045 1429857 addons.go:239] Setting addon dashboard=true in "no-preload-842269"
	W1209 05:45:26.837069 1429857 addons.go:248] addon dashboard should already be in state true
	I1209 05:45:26.837176 1429857 host.go:66] Checking if "no-preload-842269" exists ...
	I1209 05:45:26.838703 1429857 cli_runner.go:164] Run: docker container inspect no-preload-842269 --format={{.State.Status}}
	I1209 05:45:26.840073 1429857 out.go:179] * Verifying Kubernetes components...
	I1209 05:45:26.843169 1429857 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 05:45:26.862933 1429857 addons.go:239] Setting addon default-storageclass=true in "no-preload-842269"
	I1209 05:45:26.862982 1429857 host.go:66] Checking if "no-preload-842269" exists ...
	I1209 05:45:26.863397 1429857 cli_runner.go:164] Run: docker container inspect no-preload-842269 --format={{.State.Status}}
	I1209 05:45:26.876649 1429857 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 05:45:26.882278 1429857 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 05:45:26.882312 1429857 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1209 05:45:26.882383 1429857 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-842269
	I1209 05:45:26.897424 1429857 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1209 05:45:26.900169 1429857 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1209 05:45:26.906221 1429857 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1209 05:45:26.906259 1429857 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1209 05:45:26.906343 1429857 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-842269
	I1209 05:45:26.924326 1429857 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1209 05:45:26.924348 1429857 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1209 05:45:26.924420 1429857 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-842269
	I1209 05:45:26.963193 1429857 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34210 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/no-preload-842269/id_rsa Username:docker}
	I1209 05:45:26.976834 1429857 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34210 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/no-preload-842269/id_rsa Username:docker}
	I1209 05:45:26.980349 1429857 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34210 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/no-preload-842269/id_rsa Username:docker}
	I1209 05:45:27.069732 1429857 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 05:45:27.125899 1429857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 05:45:27.154169 1429857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1209 05:45:27.157146 1429857 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1209 05:45:27.157166 1429857 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1209 05:45:27.225908 1429857 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1209 05:45:27.225931 1429857 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1209 05:45:27.240153 1429857 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1209 05:45:27.240176 1429857 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1209 05:45:27.253621 1429857 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1209 05:45:27.253645 1429857 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1209 05:45:27.266747 1429857 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1209 05:45:27.266820 1429857 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1209 05:45:27.280090 1429857 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1209 05:45:27.280113 1429857 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1209 05:45:27.292756 1429857 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1209 05:45:27.292820 1429857 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1209 05:45:27.305913 1429857 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1209 05:45:27.305935 1429857 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1209 05:45:27.318675 1429857 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1209 05:45:27.318701 1429857 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1209 05:45:27.331338 1429857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1209 05:45:27.683021 1429857 node_ready.go:35] waiting up to 6m0s for node "no-preload-842269" to be "Ready" ...
	W1209 05:45:27.683361 1429857 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:45:27.683395 1429857 retry.go:31] will retry after 184.58375ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
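The recurring "dial tcp [::1]:8443: connect: connection refused" in these retries means the apiserver is not yet listening on the node. Quick probes one could run there, consistent with commands already used in this log:

    curl -sk https://localhost:8443/healthz                               # apiserver liveness, once it binds
    sudo crictl ps -a --label io.kubernetes.pod.namespace=kube-system     # is the apiserver container up?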
	W1209 05:45:27.683444 1429857 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:45:27.683451 1429857 retry.go:31] will retry after 269.389918ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 05:45:27.683630 1429857 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:45:27.683645 1429857 retry.go:31] will retry after 361.009314ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:45:27.869176 1429857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1209 05:45:27.925658 1429857 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:45:27.925689 1429857 retry.go:31] will retry after 219.894467ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:45:27.953869 1429857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1209 05:45:28.020255 1429857 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:45:28.020343 1429857 retry.go:31] will retry after 279.215289ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
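As the stderr itself suggests, client-side validation (which needs the /openapi/v2 endpoint) can be skipped while the apiserver is still coming up. A manual equivalent of the retried command, using the paths from this log:

    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --validate=false \
      -f /etc/kubernetes/addons/storageclass.yaml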
	I1209 05:45:28.045549 1429857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1209 05:45:28.108956 1429857 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:45:28.108989 1429857 retry.go:31] will retry after 273.063822ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:45:28.146313 1429857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1209 05:45:28.216595 1429857 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:45:28.216628 1429857 retry.go:31] will retry after 381.056559ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:45:28.300048 1429857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1209 05:45:28.357345 1429857 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:45:28.357379 1429857 retry.go:31] will retry after 809.396818ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:45:28.382541 1429857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1209 05:45:28.448575 1429857 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:45:28.448609 1429857 retry.go:31] will retry after 547.183213ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:45:28.597889 1429857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1209 05:45:28.654047 1429857 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:45:28.654084 1429857 retry.go:31] will retry after 1.262178547s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:45:28.996073 1429857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1209 05:45:29.058678 1429857 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:45:29.058721 1429857 retry.go:31] will retry after 492.162637ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:45:29.167844 1429857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1209 05:45:29.255905 1429857 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:45:29.255979 1429857 retry.go:31] will retry after 677.449885ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:45:29.551561 1429857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1209 05:45:29.613116 1429857 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:45:29.613148 1429857 retry.go:31] will retry after 949.934015ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 05:45:29.683816 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
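[editor's note] Both failure modes in this log share one root cause: nothing is accepting connections on the apiserver port 8443, so kubectl cannot download the OpenAPI schema for validation and node_ready.go cannot fetch the node's Ready condition. A plain TCP dial reproduces the same "connect: connection refused" error; below is a minimal sketch assuming the apiserver address from the log (localhost:8443 inside the node).

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// "connection refused" means the dial reached the host but no
		// process is listening on the port, i.e. kube-apiserver is down,
		// as opposed to a timeout, which would suggest a network problem.
		conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
		if err != nil {
			fmt.Println("apiserver not reachable:", err)
			return
		}
		conn.Close()
		fmt.Println("apiserver port is accepting connections")
	}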
	I1209 05:45:29.917380 1429857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 05:45:29.933951 1429857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1209 05:45:30.056298 1429857 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:45:30.056338 1429857 retry.go:31] will retry after 692.239155ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 05:45:30.056406 1429857 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:45:30.056419 1429857 retry.go:31] will retry after 1.787501236s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:45:30.563380 1429857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1209 05:45:30.625844 1429857 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:45:30.625911 1429857 retry.go:31] will retry after 1.269031662s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:45:30.749550 1429857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1209 05:45:30.807105 1429857 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:45:30.807136 1429857 retry.go:31] will retry after 2.270752641s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 05:45:31.684175 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	I1209 05:45:31.844648 1429857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 05:45:31.895165 1429857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1209 05:45:31.914099 1429857 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:45:31.914132 1429857 retry.go:31] will retry after 1.693137355s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 05:45:31.975352 1429857 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:45:31.975398 1429857 retry.go:31] will retry after 3.456836552s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:45:33.078099 1429857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1209 05:45:33.136837 1429857 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:45:33.136870 1429857 retry.go:31] will retry after 2.044494816s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:45:33.607514 1429857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1209 05:45:33.665951 1429857 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:45:33.665984 1429857 retry.go:31] will retry after 3.185980177s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 05:45:33.684482 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	I1209 05:45:35.181952 1429857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1209 05:45:35.257647 1429857 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:45:35.257683 1429857 retry.go:31] will retry after 6.247119086s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:45:35.432935 1429857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1209 05:45:35.489710 1429857 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:45:35.489747 1429857 retry.go:31] will retry after 5.005761894s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 05:45:36.183633 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	I1209 05:45:36.853121 1429857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1209 05:45:36.910093 1429857 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:45:36.910124 1429857 retry.go:31] will retry after 2.260143685s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 05:45:38.684059 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	I1209 05:45:39.170535 1429857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1209 05:45:39.232801 1429857 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:45:39.232833 1429857 retry.go:31] will retry after 5.898281664s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:45:40.496123 1429857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1209 05:45:40.569501 1429857 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:45:40.569541 1429857 retry.go:31] will retry after 5.242247905s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 05:45:41.183647 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	I1209 05:45:41.505063 1429857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1209 05:45:41.569422 1429857 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:45:41.569455 1429857 retry.go:31] will retry after 4.503235869s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 05:45:43.683557 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	I1209 05:45:45.132421 1429857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1209 05:45:45.222154 1429857 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:45:45.222198 1429857 retry.go:31] will retry after 8.250619683s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 05:45:45.684059 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	I1209 05:45:45.812550 1429857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1209 05:45:45.881229 1429857 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:45:45.881261 1429857 retry.go:31] will retry after 8.251153137s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:45:46.073504 1429857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1209 05:45:46.133618 1429857 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:45:46.133649 1429857 retry.go:31] will retry after 8.692623616s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 05:45:47.684156 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:45:49.684420 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:45:52.184266 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	I1209 05:45:53.473746 1429857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1209 05:45:53.531667 1429857 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:45:53.531698 1429857 retry.go:31] will retry after 15.506930845s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:45:54.132979 1429857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1209 05:45:54.193191 1429857 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:45:54.193224 1429857 retry.go:31] will retry after 10.284746977s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 05:45:54.683975 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	I1209 05:45:54.827471 1429857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1209 05:45:54.889500 1429857 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:45:54.889532 1429857 retry.go:31] will retry after 18.446693624s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 05:45:56.684069 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:45:58.684566 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:46:01.183568 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:46:03.184493 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	I1209 05:46:04.478972 1429857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1209 05:46:04.538725 1429857 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:46:04.538770 1429857 retry.go:31] will retry after 23.738719196s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 05:46:05.684477 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:46:08.183831 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	I1209 05:46:09.038866 1429857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1209 05:46:09.093686 1429857 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:46:09.093718 1429857 retry.go:31] will retry after 26.248517502s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 05:46:10.184557 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:46:12.684037 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	I1209 05:46:13.337395 1429857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1209 05:46:13.395453 1429857 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:46:13.395482 1429857 retry.go:31] will retry after 20.604537862s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 05:46:14.684146 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:46:17.183632 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:46:19.184010 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:46:21.184436 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:46:23.684454 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:46:26.184391 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
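The node_ready poller is failing against the node's own address (192.168.85.2:8443) with the same connection-refused error, which points at the apiserver process itself rather than at the addon manifests. A quick way to confirm that from the node (a sketch, assuming curl is available; /healthz is served to unauthenticated clients by default):

	# Connection refused here confirms nothing is listening on the apiserver port.
	curl -k https://192.168.85.2:8443/healthz

	# Equivalent check through kubectl, using the same kubeconfig as the log:
	sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get nodes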
	I1209 05:46:28.277860 1429857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1209 05:46:28.338442 1429857 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:46:28.338477 1429857 retry.go:31] will retry after 19.859111094s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 05:46:28.684457 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:46:31.184269 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:46:33.683555 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	I1209 05:46:34.001016 1429857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1209 05:46:34.063987 1429857 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:46:34.064033 1429857 retry.go:31] will retry after 28.707309643s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:46:35.342890 1429857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1209 05:46:35.399451 1429857 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:46:35.399484 1429857 retry.go:31] will retry after 27.272034746s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
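The retry.go lines show the addon applier backing off with jittered, roughly increasing delays (5.2s, 4.5s, 8.3s, ... up to ~28s) rather than hammering the apiserver. A minimal shell sketch of that pattern, illustrative only and not minikube's actual retry.go implementation:

	# Retry the apply with increasing delays, stopping on the first success.
	# Delay magnitudes mirror those seen in the log; the real code adds jitter.
	for delay in 5 8 15 20 27; do
	  sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	    /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force \
	    -f /etc/kubernetes/addons/storageclass.yaml && break
	  sleep "$delay"
	done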
	W1209 05:46:35.684281 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:46:38.184278 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:46:40.684368 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:46:42.684576 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:46:45.183976 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:46:47.683573 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	I1209 05:46:48.197871 1429857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1209 05:46:48.261674 1429857 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 05:46:48.261785 1429857 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1209 05:46:49.683828 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:46:51.684493 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:46:54.184206 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:46:56.683584 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:46:58.684546 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:47:01.184173 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	I1209 05:47:02.671757 1429857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1209 05:47:02.753978 1429857 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 05:47:02.754067 1429857 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1209 05:47:02.772232 1429857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1209 05:47:02.829567 1429857 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 05:47:02.829669 1429857 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1209 05:47:02.832581 1429857 out.go:179] * Enabled addons: 
	I1209 05:47:02.835625 1429857 addons.go:530] duration metric: took 1m36.002308157s for enable addons: enabled=[]
	W1209 05:47:03.184442 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	[... 114 further identical node_ready.go:55 "connection refused" retries, roughly every 2-2.5s from 05:47:05 through 05:51:23, elided ...]
	W1209 05:51:26.184579 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	I1209 05:51:27.683214 1429857 node_ready.go:38] duration metric: took 6m0.000146062s for node "no-preload-842269" to be "Ready" ...
	I1209 05:51:27.686512 1429857 out.go:203] 
	W1209 05:51:27.689522 1429857 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1209 05:51:27.689540 1429857 out.go:285] * 
	W1209 05:51:27.691657 1429857 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 05:51:27.694499 1429857 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p no-preload-842269 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0": exit status 80
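Note on the stderr above: every addon apply fails kubectl's client-side validation because the OpenAPI schema cannot be downloaded from the apiserver (connection refused on localhost:8443). The error text itself suggests --validate=false; a manual retry along those lines would look like the sketch below (command assembled from the paths in the log; skipping validation only bypasses the client-side check, so the apply would still fail while the apiserver is unreachable):

	sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force \
	  --validate=false -f /etc/kubernetes/addons/storageclass.yaml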
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-842269
helpers_test.go:243: (dbg) docker inspect no-preload-842269:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "9789b34a5453b154c4ceca4f0038c1d7948d7f0f72f334a114a8452d803ad415",
	        "Created": "2025-12-09T05:35:10.617601088Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1429985,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-09T05:45:19.572205739Z",
	            "FinishedAt": "2025-12-09T05:45:18.233836564Z"
	        },
	        "Image": "sha256:e4eb91ed18a24161fce60c7cdd660144ecd5b8c5029dc2dea2c5e423c2f48ce4",
	        "ResolvConfPath": "/var/lib/docker/containers/9789b34a5453b154c4ceca4f0038c1d7948d7f0f72f334a114a8452d803ad415/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9789b34a5453b154c4ceca4f0038c1d7948d7f0f72f334a114a8452d803ad415/hostname",
	        "HostsPath": "/var/lib/docker/containers/9789b34a5453b154c4ceca4f0038c1d7948d7f0f72f334a114a8452d803ad415/hosts",
	        "LogPath": "/var/lib/docker/containers/9789b34a5453b154c4ceca4f0038c1d7948d7f0f72f334a114a8452d803ad415/9789b34a5453b154c4ceca4f0038c1d7948d7f0f72f334a114a8452d803ad415-json.log",
	        "Name": "/no-preload-842269",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-842269:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-842269",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9789b34a5453b154c4ceca4f0038c1d7948d7f0f72f334a114a8452d803ad415",
	                "LowerDir": "/var/lib/docker/overlay2/658f95b1f330fcb0d4774e9611c50218d409d0a889583e0050325b4fe479e9f7-init/diff:/var/lib/docker/overlay2/c44bb57aa59cc265266f37f2bb6e7ec0e7d641c3b4aeaa57e6d23deec6f0d1d4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/658f95b1f330fcb0d4774e9611c50218d409d0a889583e0050325b4fe479e9f7/merged",
	                "UpperDir": "/var/lib/docker/overlay2/658f95b1f330fcb0d4774e9611c50218d409d0a889583e0050325b4fe479e9f7/diff",
	                "WorkDir": "/var/lib/docker/overlay2/658f95b1f330fcb0d4774e9611c50218d409d0a889583e0050325b4fe479e9f7/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-842269",
	                "Source": "/var/lib/docker/volumes/no-preload-842269/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-842269",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-842269",
	                "name.minikube.sigs.k8s.io": "no-preload-842269",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7fcd619b0c6697c145e92186b02d3f8b52fc0617bc693eecdb3992bd01dd5379",
	            "SandboxKey": "/var/run/docker/netns/7fcd619b0c66",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34210"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34211"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34214"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34212"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34213"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-842269": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "6e:db:fc:0d:87:5a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6461bd7226e5723487f325bf78054dc63f1dafa2831abe7b44a8cc288dfa4456",
	                    "EndpointID": "26ea729d3df39a6ce095a6c0877cc7989e68004132accb6fb25a8d1686357af6",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-842269",
	                        "9789b34a5453"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
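The inspect dump above is the source of two values this test exercises: the published host ports (22/tcp → 34210, 8443/tcp → 34213, all bound to 127.0.0.1) and the container's static address 192.168.85.2 on the no-preload-842269 network. A minimal way to reproduce the two lookups by hand, assuming the container still exists, mirrors the Go templates minikube itself runs later in this log:

    # host port published for the guest SSH daemon (22/tcp)
    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' no-preload-842269
    # container IP on its user-defined network
    docker container inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' no-preload-842269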
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-842269 -n no-preload-842269
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-842269 -n no-preload-842269: exit status 2 (323.128261ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-842269 logs -n 25
helpers_test.go:260: TestStartStop/group/no-preload/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                            ARGS                                                                                                                            │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p embed-certs-432108 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-432108           │ jenkins │ v1.37.0 │ 09 Dec 25 05:36 UTC │ 09 Dec 25 05:37 UTC │
	│ image   │ embed-certs-432108 image list --format=json                                                                                                                                                                                                                │ embed-certs-432108           │ jenkins │ v1.37.0 │ 09 Dec 25 05:37 UTC │ 09 Dec 25 05:37 UTC │
	│ pause   │ -p embed-certs-432108 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-432108           │ jenkins │ v1.37.0 │ 09 Dec 25 05:37 UTC │ 09 Dec 25 05:37 UTC │
	│ unpause │ -p embed-certs-432108 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-432108           │ jenkins │ v1.37.0 │ 09 Dec 25 05:37 UTC │ 09 Dec 25 05:37 UTC │
	│ delete  │ -p embed-certs-432108                                                                                                                                                                                                                                      │ embed-certs-432108           │ jenkins │ v1.37.0 │ 09 Dec 25 05:37 UTC │ 09 Dec 25 05:37 UTC │
	│ delete  │ -p embed-certs-432108                                                                                                                                                                                                                                      │ embed-certs-432108           │ jenkins │ v1.37.0 │ 09 Dec 25 05:37 UTC │ 09 Dec 25 05:37 UTC │
	│ start   │ -p default-k8s-diff-port-564611 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-564611 │ jenkins │ v1.37.0 │ 09 Dec 25 05:37 UTC │ 09 Dec 25 05:39 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-564611 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ default-k8s-diff-port-564611 │ jenkins │ v1.37.0 │ 09 Dec 25 05:39 UTC │ 09 Dec 25 05:39 UTC │
	│ stop    │ -p default-k8s-diff-port-564611 --alsologtostderr -v=3                                                                                                                                                                                                     │ default-k8s-diff-port-564611 │ jenkins │ v1.37.0 │ 09 Dec 25 05:39 UTC │ 09 Dec 25 05:39 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-564611 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ default-k8s-diff-port-564611 │ jenkins │ v1.37.0 │ 09 Dec 25 05:39 UTC │ 09 Dec 25 05:39 UTC │
	│ start   │ -p default-k8s-diff-port-564611 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-564611 │ jenkins │ v1.37.0 │ 09 Dec 25 05:39 UTC │ 09 Dec 25 05:40 UTC │
	│ image   │ default-k8s-diff-port-564611 image list --format=json                                                                                                                                                                                                      │ default-k8s-diff-port-564611 │ jenkins │ v1.37.0 │ 09 Dec 25 05:40 UTC │ 09 Dec 25 05:40 UTC │
	│ pause   │ -p default-k8s-diff-port-564611 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-564611 │ jenkins │ v1.37.0 │ 09 Dec 25 05:40 UTC │ 09 Dec 25 05:40 UTC │
	│ unpause │ -p default-k8s-diff-port-564611 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-564611 │ jenkins │ v1.37.0 │ 09 Dec 25 05:40 UTC │ 09 Dec 25 05:40 UTC │
	│ delete  │ -p default-k8s-diff-port-564611                                                                                                                                                                                                                            │ default-k8s-diff-port-564611 │ jenkins │ v1.37.0 │ 09 Dec 25 05:40 UTC │ 09 Dec 25 05:40 UTC │
	│ delete  │ -p default-k8s-diff-port-564611                                                                                                                                                                                                                            │ default-k8s-diff-port-564611 │ jenkins │ v1.37.0 │ 09 Dec 25 05:40 UTC │ 09 Dec 25 05:40 UTC │
	│ start   │ -p newest-cni-262540 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-262540            │ jenkins │ v1.37.0 │ 09 Dec 25 05:40 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-842269 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ no-preload-842269            │ jenkins │ v1.37.0 │ 09 Dec 25 05:43 UTC │                     │
	│ stop    │ -p no-preload-842269 --alsologtostderr -v=3                                                                                                                                                                                                                │ no-preload-842269            │ jenkins │ v1.37.0 │ 09 Dec 25 05:45 UTC │ 09 Dec 25 05:45 UTC │
	│ addons  │ enable dashboard -p no-preload-842269 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                               │ no-preload-842269            │ jenkins │ v1.37.0 │ 09 Dec 25 05:45 UTC │ 09 Dec 25 05:45 UTC │
	│ start   │ -p no-preload-842269 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-842269            │ jenkins │ v1.37.0 │ 09 Dec 25 05:45 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-262540 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ newest-cni-262540            │ jenkins │ v1.37.0 │ 09 Dec 25 05:49 UTC │                     │
	│ stop    │ -p newest-cni-262540 --alsologtostderr -v=3                                                                                                                                                                                                                │ newest-cni-262540            │ jenkins │ v1.37.0 │ 09 Dec 25 05:50 UTC │ 09 Dec 25 05:50 UTC │
	│ addons  │ enable dashboard -p newest-cni-262540 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                               │ newest-cni-262540            │ jenkins │ v1.37.0 │ 09 Dec 25 05:50 UTC │ 09 Dec 25 05:50 UTC │
	│ start   │ -p newest-cni-262540 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-262540            │ jenkins │ v1.37.0 │ 09 Dec 25 05:50 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/09 05:50:48
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 05:50:48.368732 1437114 out.go:360] Setting OutFile to fd 1 ...
	I1209 05:50:48.368913 1437114 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 05:50:48.368940 1437114 out.go:374] Setting ErrFile to fd 2...
	I1209 05:50:48.368958 1437114 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 05:50:48.369216 1437114 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-1142328/.minikube/bin
	I1209 05:50:48.369601 1437114 out.go:368] Setting JSON to false
	I1209 05:50:48.370536 1437114 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":30772,"bootTime":1765228677,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1209 05:50:48.370622 1437114 start.go:143] virtualization:  
	I1209 05:50:48.373806 1437114 out.go:179] * [newest-cni-262540] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1209 05:50:48.377517 1437114 out.go:179]   - MINIKUBE_LOCATION=22081
	I1209 05:50:48.377579 1437114 notify.go:221] Checking for updates...
	I1209 05:50:48.383314 1437114 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 05:50:48.386284 1437114 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22081-1142328/kubeconfig
	I1209 05:50:48.389132 1437114 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-1142328/.minikube
	I1209 05:50:48.392076 1437114 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1209 05:50:48.394975 1437114 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 05:50:48.398361 1437114 config.go:182] Loaded profile config "newest-cni-262540": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1209 05:50:48.398977 1437114 driver.go:422] Setting default libvirt URI to qemu:///system
	I1209 05:50:48.429565 1437114 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1209 05:50:48.429674 1437114 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 05:50:48.493190 1437114 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-09 05:50:48.483865172 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1209 05:50:48.493298 1437114 docker.go:319] overlay module found
	I1209 05:50:48.496461 1437114 out.go:179] * Using the docker driver based on existing profile
	I1209 05:50:48.499256 1437114 start.go:309] selected driver: docker
	I1209 05:50:48.499276 1437114 start.go:927] validating driver "docker" against &{Name:newest-cni-262540 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-262540 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 05:50:48.499393 1437114 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 05:50:48.500188 1437114 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 05:50:48.552839 1437114 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-09 05:50:48.544121972 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1209 05:50:48.553181 1437114 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1209 05:50:48.553214 1437114 cni.go:84] Creating CNI manager for ""
	I1209 05:50:48.553271 1437114 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1209 05:50:48.553312 1437114 start.go:353] cluster config:
	{Name:newest-cni-262540 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-262540 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 05:50:48.558270 1437114 out.go:179] * Starting "newest-cni-262540" primary control-plane node in "newest-cni-262540" cluster
	I1209 05:50:48.560987 1437114 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1209 05:50:48.563913 1437114 out.go:179] * Pulling base image v0.0.48-1765184860-22066 ...
	I1209 05:50:48.566628 1437114 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1209 05:50:48.566677 1437114 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1209 05:50:48.566701 1437114 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local docker daemon
	I1209 05:50:48.566709 1437114 cache.go:65] Caching tarball of preloaded images
	I1209 05:50:48.566793 1437114 preload.go:238] Found /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1209 05:50:48.566803 1437114 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1209 05:50:48.566914 1437114 profile.go:143] Saving config to /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/config.json ...
	I1209 05:50:48.585366 1437114 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local docker daemon, skipping pull
	I1209 05:50:48.585390 1437114 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c exists in daemon, skipping load
	I1209 05:50:48.585410 1437114 cache.go:243] Successfully downloaded all kic artifacts
	I1209 05:50:48.585447 1437114 start.go:360] acquireMachinesLock for newest-cni-262540: {Name:mk272d84ff1bc8c8949f2f0b1f608a7519899d10 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 05:50:48.585504 1437114 start.go:364] duration metric: took 35.806µs to acquireMachinesLock for "newest-cni-262540"
	I1209 05:50:48.585529 1437114 start.go:96] Skipping create...Using existing machine configuration
	I1209 05:50:48.585539 1437114 fix.go:54] fixHost starting: 
	I1209 05:50:48.585799 1437114 cli_runner.go:164] Run: docker container inspect newest-cni-262540 --format={{.State.Status}}
	I1209 05:50:48.601614 1437114 fix.go:112] recreateIfNeeded on newest-cni-262540: state=Stopped err=<nil>
	W1209 05:50:48.601645 1437114 fix.go:138] unexpected machine state, will restart: <nil>
	W1209 05:50:45.187180 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:50:47.684513 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	I1209 05:50:48.604910 1437114 out.go:252] * Restarting existing docker container for "newest-cni-262540" ...
	I1209 05:50:48.604997 1437114 cli_runner.go:164] Run: docker start newest-cni-262540
	I1209 05:50:48.871934 1437114 cli_runner.go:164] Run: docker container inspect newest-cni-262540 --format={{.State.Status}}
	I1209 05:50:48.896820 1437114 kic.go:430] container "newest-cni-262540" state is running.
	I1209 05:50:48.898586 1437114 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-262540
	I1209 05:50:48.919622 1437114 profile.go:143] Saving config to /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/config.json ...
	I1209 05:50:48.919952 1437114 machine.go:94] provisionDockerMachine start ...
	I1209 05:50:48.920090 1437114 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-262540
	I1209 05:50:48.944382 1437114 main.go:143] libmachine: Using SSH client type: native
	I1209 05:50:48.944721 1437114 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 34215 <nil> <nil>}
	I1209 05:50:48.944730 1437114 main.go:143] libmachine: About to run SSH command:
	hostname
	I1209 05:50:48.945423 1437114 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:54144->127.0.0.1:34215: read: connection reset by peer
	I1209 05:50:52.103931 1437114 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-262540
	
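For manual debugging of the SSH provisioning above, the same endpoint can be reached directly with the key path and port that sshutil reports further down in this log (127.0.0.1:34215, user docker); a sketch:

    ssh -o StrictHostKeyChecking=no \
        -i /home/jenkins/minikube-integration/22081-1142328/.minikube/machines/newest-cni-262540/id_rsa \
        -p 34215 docker@127.0.0.1 hostname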
	I1209 05:50:52.103958 1437114 ubuntu.go:182] provisioning hostname "newest-cni-262540"
	I1209 05:50:52.104072 1437114 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-262540
	I1209 05:50:52.121462 1437114 main.go:143] libmachine: Using SSH client type: native
	I1209 05:50:52.121778 1437114 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 34215 <nil> <nil>}
	I1209 05:50:52.121795 1437114 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-262540 && echo "newest-cni-262540" | sudo tee /etc/hostname
	I1209 05:50:52.280621 1437114 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-262540
	
	I1209 05:50:52.280705 1437114 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-262540
	I1209 05:50:52.301681 1437114 main.go:143] libmachine: Using SSH client type: native
	I1209 05:50:52.301997 1437114 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 34215 <nil> <nil>}
	I1209 05:50:52.302019 1437114 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-262540' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-262540/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-262540' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 05:50:52.452274 1437114 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1209 05:50:52.452304 1437114 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22081-1142328/.minikube CaCertPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22081-1142328/.minikube}
	I1209 05:50:52.452324 1437114 ubuntu.go:190] setting up certificates
	I1209 05:50:52.452332 1437114 provision.go:84] configureAuth start
	I1209 05:50:52.452391 1437114 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-262540
	I1209 05:50:52.475825 1437114 provision.go:143] copyHostCerts
	I1209 05:50:52.475907 1437114 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.pem, removing ...
	I1209 05:50:52.475921 1437114 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.pem
	I1209 05:50:52.475999 1437114 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.pem (1078 bytes)
	I1209 05:50:52.476136 1437114 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-1142328/.minikube/cert.pem, removing ...
	I1209 05:50:52.476147 1437114 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-1142328/.minikube/cert.pem
	I1209 05:50:52.476175 1437114 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22081-1142328/.minikube/cert.pem (1123 bytes)
	I1209 05:50:52.476288 1437114 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-1142328/.minikube/key.pem, removing ...
	I1209 05:50:52.476322 1437114 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-1142328/.minikube/key.pem
	I1209 05:50:52.476364 1437114 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22081-1142328/.minikube/key.pem (1675 bytes)
	I1209 05:50:52.476440 1437114 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca-key.pem org=jenkins.newest-cni-262540 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-262540]
	I1209 05:50:52.561012 1437114 provision.go:177] copyRemoteCerts
	I1209 05:50:52.561084 1437114 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 05:50:52.561133 1437114 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-262540
	I1209 05:50:52.578674 1437114 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34215 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/newest-cni-262540/id_rsa Username:docker}
	I1209 05:50:52.685758 1437114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1209 05:50:52.702408 1437114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1209 05:50:52.719173 1437114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1671 bytes)
	I1209 05:50:52.736435 1437114 provision.go:87] duration metric: took 284.081054ms to configureAuth
	I1209 05:50:52.736462 1437114 ubuntu.go:206] setting minikube options for container-runtime
	I1209 05:50:52.736672 1437114 config.go:182] Loaded profile config "newest-cni-262540": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1209 05:50:52.736698 1437114 machine.go:97] duration metric: took 3.816733312s to provisionDockerMachine
	I1209 05:50:52.736707 1437114 start.go:293] postStartSetup for "newest-cni-262540" (driver="docker")
	I1209 05:50:52.736719 1437114 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 05:50:52.736771 1437114 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 05:50:52.736819 1437114 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-262540
	I1209 05:50:52.753733 1437114 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34215 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/newest-cni-262540/id_rsa Username:docker}
	I1209 05:50:52.859644 1437114 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 05:50:52.862806 1437114 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1209 05:50:52.862830 1437114 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1209 05:50:52.862841 1437114 filesync.go:126] Scanning /home/jenkins/minikube-integration/22081-1142328/.minikube/addons for local assets ...
	I1209 05:50:52.862893 1437114 filesync.go:126] Scanning /home/jenkins/minikube-integration/22081-1142328/.minikube/files for local assets ...
	I1209 05:50:52.862974 1437114 filesync.go:149] local asset: /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/ssl/certs/11442312.pem -> 11442312.pem in /etc/ssl/certs
	I1209 05:50:52.863076 1437114 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 05:50:52.870063 1437114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/ssl/certs/11442312.pem --> /etc/ssl/certs/11442312.pem (1708 bytes)
	I1209 05:50:52.886852 1437114 start.go:296] duration metric: took 150.129481ms for postStartSetup
	I1209 05:50:52.886932 1437114 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1209 05:50:52.887020 1437114 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-262540
	I1209 05:50:52.904086 1437114 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34215 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/newest-cni-262540/id_rsa Username:docker}
	I1209 05:50:53.006063 1437114 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1209 05:50:53.011716 1437114 fix.go:56] duration metric: took 4.426170276s for fixHost
	I1209 05:50:53.011745 1437114 start.go:83] releasing machines lock for "newest-cni-262540", held for 4.426228294s
	I1209 05:50:53.011812 1437114 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-262540
	I1209 05:50:53.028468 1437114 ssh_runner.go:195] Run: cat /version.json
	I1209 05:50:53.028532 1437114 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-262540
	I1209 05:50:53.028815 1437114 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 05:50:53.028886 1437114 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-262540
	I1209 05:50:53.050698 1437114 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34215 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/newest-cni-262540/id_rsa Username:docker}
	I1209 05:50:53.061651 1437114 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34215 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/newest-cni-262540/id_rsa Username:docker}
	I1209 05:50:53.151708 1437114 ssh_runner.go:195] Run: systemctl --version
	I1209 05:50:53.249572 1437114 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 05:50:53.254184 1437114 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 05:50:53.254256 1437114 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 05:50:53.261725 1437114 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1209 05:50:53.261749 1437114 start.go:496] detecting cgroup driver to use...
	I1209 05:50:53.261780 1437114 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1209 05:50:53.261828 1437114 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1209 05:50:53.278531 1437114 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1209 05:50:53.291190 1437114 docker.go:218] disabling cri-docker service (if available) ...
	I1209 05:50:53.291252 1437114 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 05:50:53.306525 1437114 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 05:50:53.319477 1437114 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 05:50:53.424347 1437114 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 05:50:53.539911 1437114 docker.go:234] disabling docker service ...
	I1209 05:50:53.540005 1437114 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 05:50:53.555506 1437114 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 05:50:53.568379 1437114 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 05:50:53.684143 1437114 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 05:50:53.819865 1437114 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 05:50:53.834400 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 05:50:53.848555 1437114 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1209 05:50:53.857346 1437114 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1209 05:50:53.866232 1437114 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1209 05:50:53.866362 1437114 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1209 05:50:53.875141 1437114 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1209 05:50:53.883775 1437114 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1209 05:50:53.892743 1437114 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1209 05:50:53.901606 1437114 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 05:50:53.909694 1437114 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1209 05:50:53.918469 1437114 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1209 05:50:53.927272 1437114 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1209 05:50:53.939275 1437114 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 05:50:53.948029 1437114 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 05:50:53.956257 1437114 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 05:50:54.075166 1437114 ssh_runner.go:195] Run: sudo systemctl restart containerd
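Taken together, the sed edits above aim at a config.toml whose CRI section ends up roughly like the fragment below. This is a sketch of the intended end state under the plugin path the seds target, not a verbatim copy of the file on the node:

    [plugins."io.containerd.grpc.v1.cri"]
      enable_unprivileged_ports = true
      sandbox_image = "registry.k8s.io/pause:3.10.1"
      restrict_oom_score_adj = false
      [plugins."io.containerd.grpc.v1.cri".cni]
        conf_dir = "/etc/cni/net.d"
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
        runtime_type = "io.containerd.runc.v2"
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
          SystemdCgroup = false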
	I1209 05:50:54.195479 1437114 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1209 05:50:54.195546 1437114 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1209 05:50:54.199412 1437114 start.go:564] Will wait 60s for crictl version
	I1209 05:50:54.199478 1437114 ssh_runner.go:195] Run: which crictl
	I1209 05:50:54.203349 1437114 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1209 05:50:54.229036 1437114 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1209 05:50:54.229147 1437114 ssh_runner.go:195] Run: containerd --version
	I1209 05:50:54.257755 1437114 ssh_runner.go:195] Run: containerd --version
	I1209 05:50:54.281890 1437114 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	W1209 05:50:50.184270 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:50:52.684275 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
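The interleaved W-lines from process 1429857 are the concurrent no-preload-842269 start failing its node Ready poll while that cluster's apiserver is down. A quick manual probe of the same endpoint, assuming the host can route to the container network, distinguishes a refused connection from a merely slow apiserver:

    curl -k --max-time 2 https://192.168.85.2:8443/healthz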
	I1209 05:50:54.284780 1437114 cli_runner.go:164] Run: docker network inspect newest-cni-262540 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1209 05:50:54.300458 1437114 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1209 05:50:54.304227 1437114 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 05:50:54.316829 1437114 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1209 05:50:54.319602 1437114 kubeadm.go:884] updating cluster {Name:newest-cni-262540 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-262540 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 05:50:54.319761 1437114 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1209 05:50:54.319850 1437114 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 05:50:54.344882 1437114 containerd.go:627] all images are preloaded for containerd runtime.
	I1209 05:50:54.344907 1437114 containerd.go:534] Images already preloaded, skipping extraction
	I1209 05:50:54.344969 1437114 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 05:50:54.368351 1437114 containerd.go:627] all images are preloaded for containerd runtime.
	I1209 05:50:54.368375 1437114 cache_images.go:86] Images are preloaded, skipping loading
	I1209 05:50:54.368384 1437114 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1209 05:50:54.368487 1437114 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-262540 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-262540 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
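The unit text above is written as a systemd drop-in (scp'd to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below); the empty ExecStart= line clears the packaged command before the override takes effect. Inside the node, the effective command line can be confirmed with:

    systemctl cat kubelet                 # unit file plus drop-ins, in override order
    systemctl show -p ExecStart kubelet   # the ExecStart actually in effect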
	I1209 05:50:54.368554 1437114 ssh_runner.go:195] Run: sudo crictl info
	I1209 05:50:54.396480 1437114 cni.go:84] Creating CNI manager for ""
	I1209 05:50:54.396505 1437114 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1209 05:50:54.396527 1437114 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1209 05:50:54.396551 1437114 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-262540 NodeName:newest-cni-262540 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1209 05:50:54.396668 1437114 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-262540"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
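Before kubeadm consumes it, the rendered manifest above (written to /var/tmp/minikube/kubeadm.yaml.new a few lines below) can be sanity-checked without touching the cluster; a hedged example, using the same pinned binary the test installs on the node:

    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml.new --dry-run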
	I1209 05:50:54.396755 1437114 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1209 05:50:54.404357 1437114 binaries.go:51] Found k8s binaries, skipping transfer
	I1209 05:50:54.404462 1437114 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1209 05:50:54.411829 1437114 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1209 05:50:54.423915 1437114 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1209 05:50:54.436484 1437114 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
	I1209 05:50:54.448905 1437114 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1209 05:50:54.452398 1437114 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 05:50:54.461840 1437114 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 05:50:54.574379 1437114 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 05:50:54.590263 1437114 certs.go:69] Setting up /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540 for IP: 192.168.76.2
	I1209 05:50:54.590332 1437114 certs.go:195] generating shared ca certs ...
	I1209 05:50:54.590364 1437114 certs.go:227] acquiring lock for ca certs: {Name:mk15788702f8c4e23b5aeab3f44961d296fab259 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 05:50:54.590561 1437114 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.key
	I1209 05:50:54.590652 1437114 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/proxy-client-ca.key
	I1209 05:50:54.590688 1437114 certs.go:257] generating profile certs ...
	I1209 05:50:54.590838 1437114 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/client.key
	I1209 05:50:54.590942 1437114 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/apiserver.key.0ed49b31
	I1209 05:50:54.591051 1437114 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/proxy-client.key
	I1209 05:50:54.591210 1437114 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/1144231.pem (1338 bytes)
	W1209 05:50:54.591287 1437114 certs.go:480] ignoring /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/1144231_empty.pem, impossibly tiny 0 bytes
	I1209 05:50:54.591314 1437114 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca-key.pem (1679 bytes)
	I1209 05:50:54.591380 1437114 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem (1078 bytes)
	I1209 05:50:54.591442 1437114 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/cert.pem (1123 bytes)
	I1209 05:50:54.591490 1437114 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/key.pem (1675 bytes)
	I1209 05:50:54.591576 1437114 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/ssl/certs/11442312.pem (1708 bytes)
	I1209 05:50:54.592436 1437114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 05:50:54.617399 1437114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1209 05:50:54.636943 1437114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 05:50:54.658494 1437114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1209 05:50:54.674958 1437114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1209 05:50:54.701134 1437114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1209 05:50:54.720347 1437114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 05:50:54.738904 1437114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1209 05:50:54.758253 1437114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/1144231.pem --> /usr/share/ca-certificates/1144231.pem (1338 bytes)
	I1209 05:50:54.775204 1437114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/ssl/certs/11442312.pem --> /usr/share/ca-certificates/11442312.pem (1708 bytes)
	I1209 05:50:54.791963 1437114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 05:50:54.809403 1437114 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 05:50:54.821958 1437114 ssh_runner.go:195] Run: openssl version
	I1209 05:50:54.828113 1437114 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1209 05:50:54.835305 1437114 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1209 05:50:54.842458 1437114 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 05:50:54.846155 1437114 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 04:09 /usr/share/ca-certificates/minikubeCA.pem
	I1209 05:50:54.846222 1437114 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 05:50:54.887330 1437114 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1209 05:50:54.894630 1437114 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1144231.pem
	I1209 05:50:54.901722 1437114 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1144231.pem /etc/ssl/certs/1144231.pem
	I1209 05:50:54.909025 1437114 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1144231.pem
	I1209 05:50:54.912514 1437114 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 04:18 /usr/share/ca-certificates/1144231.pem
	I1209 05:50:54.912621 1437114 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1144231.pem
	I1209 05:50:54.953649 1437114 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1209 05:50:54.960781 1437114 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/11442312.pem
	I1209 05:50:54.967822 1437114 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/11442312.pem /etc/ssl/certs/11442312.pem
	I1209 05:50:54.975177 1437114 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11442312.pem
	I1209 05:50:54.978699 1437114 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 04:18 /usr/share/ca-certificates/11442312.pem
	I1209 05:50:54.978782 1437114 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11442312.pem
	I1209 05:50:55.020640 1437114 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1209 05:50:55.034989 1437114 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 05:50:55.043885 1437114 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1209 05:50:55.090059 1437114 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1209 05:50:55.134954 1437114 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1209 05:50:55.180095 1437114 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1209 05:50:55.223090 1437114 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1209 05:50:55.265103 1437114 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
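The six `openssl x509 -noout -checkend 86400` runs above confirm that each control-plane certificate stays valid for at least another 86400 seconds (24 hours) before the restart proceeds. A stdlib-only Go equivalent for one of those files — a sketch, assuming the certificate is a single PEM block on disk:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    func main() {
    	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		panic("no PEM block found")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	// Mirrors `openssl x509 -checkend 86400`: fail if the cert expires
    	// within the next 86400 seconds.
    	if time.Now().Add(86400 * time.Second).After(cert.NotAfter) {
    		fmt.Println("certificate will expire within 24h")
    		os.Exit(1)
    	}
    	fmt.Println("certificate is valid for at least 24h")
    }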
	I1209 05:50:55.306238 1437114 kubeadm.go:401] StartCluster: {Name:newest-cni-262540 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-262540 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 05:50:55.306348 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1209 05:50:55.306413 1437114 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 05:50:55.335032 1437114 cri.go:89] found id: ""
	I1209 05:50:55.335115 1437114 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1209 05:50:55.355619 1437114 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1209 05:50:55.355640 1437114 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1209 05:50:55.355691 1437114 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1209 05:50:55.363844 1437114 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1209 05:50:55.364433 1437114 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-262540" does not appear in /home/jenkins/minikube-integration/22081-1142328/kubeconfig
	I1209 05:50:55.364754 1437114 kubeconfig.go:62] /home/jenkins/minikube-integration/22081-1142328/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-262540" cluster setting kubeconfig missing "newest-cni-262540" context setting]
	I1209 05:50:55.365251 1437114 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1142328/kubeconfig: {Name:mk0dc127429f88dc4fdfb2d110deebc58207c1b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
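kubeconfig.go decides here that /home/jenkins/minikube-integration/22081-1142328/kubeconfig is missing both the cluster and the context entries for the profile, then repairs the file under a write lock. A minimal sketch of the detection half of that check, assuming client-go is available (the repair step itself is omitted):

    package main

    import (
    	"fmt"

    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	const path = "/home/jenkins/minikube-integration/22081-1142328/kubeconfig"
    	const profile = "newest-cni-262540"

    	cfg, err := clientcmd.LoadFromFile(path)
    	if err != nil {
    		panic(err)
    	}
    	// The "needs updating (will repair)" branch in the log fires when
    	// either the cluster or the context entry for the profile is absent.
    	if _, ok := cfg.Clusters[profile]; !ok {
    		fmt.Printf("kubeconfig missing %q cluster setting\n", profile)
    	}
    	if _, ok := cfg.Contexts[profile]; !ok {
    		fmt.Printf("kubeconfig missing %q context setting\n", profile)
    	}
    }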
	I1209 05:50:55.366765 1437114 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1209 05:50:55.375221 1437114 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1209 05:50:55.375252 1437114 kubeadm.go:602] duration metric: took 19.605753ms to restartPrimaryControlPlane
	I1209 05:50:55.375261 1437114 kubeadm.go:403] duration metric: took 69.033781ms to StartCluster
	I1209 05:50:55.375276 1437114 settings.go:142] acquiring lock: {Name:mk8fa744e3d74bf8a1cbf5ac275c9f1969ad91a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 05:50:55.375345 1437114 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22081-1142328/kubeconfig
	I1209 05:50:55.376265 1437114 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1142328/kubeconfig: {Name:mk0dc127429f88dc4fdfb2d110deebc58207c1b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 05:50:55.376705 1437114 config.go:182] Loaded profile config "newest-cni-262540": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1209 05:50:55.376504 1437114 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1209 05:50:55.376810 1437114 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1209 05:50:55.377093 1437114 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-262540"
	I1209 05:50:55.377111 1437114 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-262540"
	I1209 05:50:55.377136 1437114 host.go:66] Checking if "newest-cni-262540" exists ...
	I1209 05:50:55.377594 1437114 cli_runner.go:164] Run: docker container inspect newest-cni-262540 --format={{.State.Status}}
	I1209 05:50:55.377785 1437114 addons.go:70] Setting dashboard=true in profile "newest-cni-262540"
	I1209 05:50:55.377813 1437114 addons.go:239] Setting addon dashboard=true in "newest-cni-262540"
	W1209 05:50:55.377825 1437114 addons.go:248] addon dashboard should already be in state true
	I1209 05:50:55.377849 1437114 host.go:66] Checking if "newest-cni-262540" exists ...
	I1209 05:50:55.378304 1437114 cli_runner.go:164] Run: docker container inspect newest-cni-262540 --format={{.State.Status}}
	I1209 05:50:55.378820 1437114 addons.go:70] Setting default-storageclass=true in profile "newest-cni-262540"
	I1209 05:50:55.378864 1437114 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-262540"
	I1209 05:50:55.379212 1437114 cli_runner.go:164] Run: docker container inspect newest-cni-262540 --format={{.State.Status}}
	I1209 05:50:55.381896 1437114 out.go:179] * Verifying Kubernetes components...
	I1209 05:50:55.388614 1437114 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 05:50:55.438264 1437114 addons.go:239] Setting addon default-storageclass=true in "newest-cni-262540"
	I1209 05:50:55.438303 1437114 host.go:66] Checking if "newest-cni-262540" exists ...
	I1209 05:50:55.438728 1437114 cli_runner.go:164] Run: docker container inspect newest-cni-262540 --format={{.State.Status}}
	I1209 05:50:55.440785 1437114 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 05:50:55.442715 1437114 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 05:50:55.442743 1437114 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1209 05:50:55.442806 1437114 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-262540
	I1209 05:50:55.442947 1437114 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1209 05:50:55.445621 1437114 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1209 05:50:55.449877 1437114 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1209 05:50:55.449904 1437114 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1209 05:50:55.449976 1437114 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-262540
	I1209 05:50:55.481759 1437114 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34215 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/newest-cni-262540/id_rsa Username:docker}
	I1209 05:50:55.496417 1437114 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1209 05:50:55.496440 1437114 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1209 05:50:55.496499 1437114 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-262540
	I1209 05:50:55.515362 1437114 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34215 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/newest-cni-262540/id_rsa Username:docker}
	I1209 05:50:55.537402 1437114 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34215 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/newest-cni-262540/id_rsa Username:docker}
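The `docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'` calls above resolve which host port Docker mapped to the container's 22/tcp, which is then used for the ssh clients on 127.0.0.1:34215. The same lookup can be done in Go by decoding the inspect JSON directly — a sketch that assumes the container exists and publishes 22/tcp:

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    // inspect models only the slice of `docker inspect` output that the
    // Go template in the log above indexes into.
    type inspect struct {
    	NetworkSettings struct {
    		Ports map[string][]struct {
    			HostPort string
    		}
    	}
    }

    func main() {
    	out, err := exec.Command("docker", "inspect", "newest-cni-262540").Output()
    	if err != nil {
    		panic(err)
    	}
    	var results []inspect // docker inspect emits a JSON array
    	if err := json.Unmarshal(out, &results); err != nil {
    		panic(err)
    	}
    	fmt.Println(results[0].NetworkSettings.Ports["22/tcp"][0].HostPort)
    }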
	I1209 05:50:55.642792 1437114 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 05:50:55.677774 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 05:50:55.711653 1437114 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1209 05:50:55.711691 1437114 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1209 05:50:55.713691 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1209 05:50:55.771340 1437114 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1209 05:50:55.771368 1437114 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1209 05:50:55.785331 1437114 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1209 05:50:55.785403 1437114 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1209 05:50:55.798961 1437114 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1209 05:50:55.798984 1437114 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1209 05:50:55.811558 1437114 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1209 05:50:55.811625 1437114 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1209 05:50:55.824010 1437114 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1209 05:50:55.824113 1437114 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1209 05:50:55.836722 1437114 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1209 05:50:55.836745 1437114 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1209 05:50:55.849061 1437114 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1209 05:50:55.849126 1437114 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1209 05:50:55.862091 1437114 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1209 05:50:55.862114 1437114 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1209 05:50:55.875010 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1209 05:50:56.435552 1437114 api_server.go:52] waiting for apiserver process to appear ...
	W1209 05:50:56.435748 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:50:56.435801 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:50:56.435838 1437114 retry.go:31] will retry after 228.095144ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 05:50:56.435700 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:50:56.435898 1437114 retry.go:31] will retry after 361.053359ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 05:50:56.436142 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:50:56.436189 1437114 retry.go:31] will retry after 212.683869ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
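	Each `retry.go:31] will retry after ...` line above reruns a failed kubectl apply after a short randomized delay that grows between attempts (228ms, 361ms, 724ms, ...). A minimal sketch of that retry-with-jittered-backoff pattern — illustrative only, not minikube's actual retry.go:

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retry runs fn up to maxAttempts times, sleeping a jittered,
    // roughly doubling delay between failures.
    func retry(maxAttempts int, base time.Duration, fn func() error) error {
    	var err error
    	for attempt := 0; attempt < maxAttempts; attempt++ {
    		if err = fn(); err == nil {
    			return nil
    		}
    		// Jitter: sleep between 0.5x and 1.5x of the current base delay.
    		d := base/2 + time.Duration(rand.Int63n(int64(base)))
    		fmt.Printf("will retry after %v: %v\n", d, err)
    		time.Sleep(d)
    		base *= 2
    	}
    	return err
    }

    func main() {
    	err := retry(4, 250*time.Millisecond, func() error {
    		return errors.New("connection refused") // stand-in for the failing kubectl apply
    	})
    	fmt.Println("gave up:", err)
    }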
	I1209 05:50:56.649580 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1209 05:50:56.665010 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1209 05:50:56.729564 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:50:56.729662 1437114 retry.go:31] will retry after 263.201205ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 05:50:56.751560 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:50:56.751590 1437114 retry.go:31] will retry after 282.08987ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:50:56.797828 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1209 05:50:56.855489 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:50:56.855525 1437114 retry.go:31] will retry after 519.882573ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
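	Every failure above is the same `dial tcp [::1]:8443: connect: connection refused`, i.e. kube-apiserver is simply not listening yet, which is why the applies race the `waiting for apiserver process to appear` poll running in parallel. One way to gate such applies on the port actually opening — a sketch, not minikube's waiter:

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    // waitForAPIServer dials addr until it accepts a TCP connection or the
    // deadline passes; connection-refused just means "not up yet".
    func waitForAPIServer(addr string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		conn, err := net.DialTimeout("tcp", addr, time.Second)
    		if err == nil {
    			conn.Close()
    			return nil
    		}
    		time.Sleep(250 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver at %s not reachable within %v", addr, timeout)
    }

    func main() {
    	if err := waitForAPIServer("127.0.0.1:8443", 2*time.Minute); err != nil {
    		panic(err)
    	}
    	fmt.Println("apiserver port open; safe to run kubectl apply")
    }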
	I1209 05:50:56.936655 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:50:56.993111 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1209 05:50:57.034512 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1209 05:50:57.059780 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:50:57.059861 1437114 retry.go:31] will retry after 724.517068ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 05:50:57.095702 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:50:57.095733 1437114 retry.go:31] will retry after 773.591416ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:50:57.376312 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1209 05:50:57.435557 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:50:57.435589 1437114 retry.go:31] will retry after 453.196958ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:50:57.436773 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:50:57.784620 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1209 05:50:57.844755 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:50:57.844791 1437114 retry.go:31] will retry after 1.262011023s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:50:57.869923 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 05:50:57.889536 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 05:50:57.936212 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1209 05:50:57.961431 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:50:57.961468 1437114 retry.go:31] will retry after 546.501311ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 05:50:58.032466 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:50:58.032501 1437114 retry.go:31] will retry after 1.229436669s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
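
Every failure in this stretch has the same root cause: kubectl apply performs client-side validation by downloading the server's OpenAPI schema from /openapi/v2, so with the apiserver unreachable the command fails before anything is applied; the suggested --validate=false would only shift the error to the apply request itself. A caller could gate the apply on the apiserver's /readyz endpoint first. The sketch below assumes a local HTTPS endpoint and skips certificate verification for brevity; it is not how minikube itself gates these applies:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // apiserverReady is a hypothetical probe: /readyz returns 200 once the
    // kube-apiserver is serving. InsecureSkipVerify is for illustration only.
    func apiserverReady(url string) bool {
        client := &http.Client{
            Timeout: 2 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get(url + "/readyz")
        if err != nil {
            return false // e.g. connect: connection refused, as in the log
        }
        defer resp.Body.Close()
        return resp.StatusCode == http.StatusOK
    }

    func main() {
        fmt.Println(apiserverReady("https://localhost:8443"))
    }
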
	W1209 05:50:54.684397 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:50:57.184110 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:50:59.184561 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
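
Interleaved with the addon retries, a second test process (pid 1429857, from the no-preload StartStop group) is polling the "Ready" condition of node no-preload-842269 and hitting the same down apiserver at 192.168.85.2:8443. The check reduces to reading the NodeReady condition from the node's status; a minimal client-go sketch, assuming a working kubeconfig and not minikube's actual node_ready.go:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // nodeReady reports whether the named node's Ready condition is True.
    func nodeReady(cs kubernetes.Interface, name string) (bool, error) {
        node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
        if err != nil {
            return false, err // e.g. dial tcp 192.168.85.2:8443: connection refused
        }
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        fmt.Println(nodeReady(cs, "no-preload-842269"))
    }
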
	I1209 05:50:58.436310 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:50:58.508935 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1209 05:50:58.565163 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:50:58.565196 1437114 retry.go:31] will retry after 1.407912766s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:50:58.936676 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:50:59.107417 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1209 05:50:59.166291 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:50:59.166364 1437114 retry.go:31] will retry after 928.374807ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:50:59.262572 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1209 05:50:59.321942 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:50:59.321975 1437114 retry.go:31] will retry after 837.961471ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:50:59.436172 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:50:59.936839 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
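
Throughout this stretch the applier process also runs sudo pgrep -xnf kube-apiserver.*minikube.* roughly every 500ms, checking whether a kube-apiserver process exists at all (pgrep exits 0 only when something matches: -x matches the pattern exactly, -n selects the newest match, -f matches against the full command line). The equivalent check in Go, as a hypothetical local sketch rather than minikube's ssh_runner path:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // apiserverProcessRunning mirrors the log's probe: pgrep exits 0 when at
    // least one process matches the pattern, and 1 when none do.
    func apiserverProcessRunning() bool {
        err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
        return err == nil
    }

    func main() {
        fmt.Println(apiserverProcessRunning())
    }
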
	I1209 05:50:59.973278 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 05:51:00.094961 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1209 05:51:00.122388 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:00.122508 1437114 retry.go:31] will retry after 2.37581771s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:00.163516 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1209 05:51:00.369038 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:00.369122 1437114 retry.go:31] will retry after 1.02409357s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 05:51:00.430845 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:00.430881 1437114 retry.go:31] will retry after 1.008529781s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:00.435975 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:00.935928 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:01.393811 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1209 05:51:01.436520 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:01.440060 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1209 05:51:01.479948 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:01.480008 1437114 retry.go:31] will retry after 3.887040249s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 05:51:01.521362 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:01.521394 1437114 retry.go:31] will retry after 2.488257731s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:01.936891 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:02.436059 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:02.499505 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1209 05:51:02.558807 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:02.558839 1437114 retry.go:31] will retry after 1.68559081s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:02.936227 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1209 05:51:01.683581 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:51:04.183570 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	I1209 05:51:03.436252 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:03.936492 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:04.009914 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1209 05:51:04.068567 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:04.068604 1437114 retry.go:31] will retry after 3.558332748s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:04.244680 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1209 05:51:04.309239 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:04.309330 1437114 retry.go:31] will retry after 5.213787505s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:04.436559 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:04.936651 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:05.367810 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1209 05:51:05.433548 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:05.433586 1437114 retry.go:31] will retry after 5.477878375s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:05.436872 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:05.936073 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:06.436593 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:06.936543 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:07.436871 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:07.628150 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1209 05:51:07.690629 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:07.690661 1437114 retry.go:31] will retry after 6.157660473s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:07.935908 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1209 05:51:06.183630 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:51:08.683544 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	I1209 05:51:08.436122 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:08.935959 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:09.436970 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:09.523671 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1209 05:51:09.581839 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:09.581914 1437114 retry.go:31] will retry after 9.601279523s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:09.936233 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:10.436178 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:10.911744 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1209 05:51:10.936618 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1209 05:51:11.040149 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:11.040187 1437114 retry.go:31] will retry after 9.211684326s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:11.436896 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:11.936862 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:12.435946 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:12.936781 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1209 05:51:10.683655 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:51:12.684274 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	I1209 05:51:13.436827 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:13.848647 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1209 05:51:13.909374 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:13.909406 1437114 retry.go:31] will retry after 5.044533036s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:13.936521 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:14.436557 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:14.935977 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:15.436310 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:15.936335 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:16.436628 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:16.936535 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:17.436311 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:17.935962 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1209 05:51:15.183508 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:51:17.183575 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:51:19.184498 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	I1209 05:51:18.435898 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:18.936142 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:18.955073 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1209 05:51:19.020072 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:19.020104 1437114 retry.go:31] will retry after 11.951102235s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:19.184688 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1209 05:51:19.284505 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:19.284538 1437114 retry.go:31] will retry after 12.030085055s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:19.435928 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:19.936763 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:20.252740 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1209 05:51:20.316752 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:20.316784 1437114 retry.go:31] will retry after 7.019613017s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:20.436227 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:20.936875 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:21.435907 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:21.935963 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:22.436158 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:22.936474 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1209 05:51:21.683564 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:51:23.683626 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:51:26.184579 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	I1209 05:51:27.683214 1429857 node_ready.go:38] duration metric: took 6m0.000146062s for node "no-preload-842269" to be "Ready" ...
	I1209 05:51:27.686512 1429857 out.go:203] 
	W1209 05:51:27.689522 1429857 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1209 05:51:27.689540 1429857 out.go:285] * 
	W1209 05:51:27.691657 1429857 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 05:51:27.694499 1429857 out.go:203] 
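
This is minikube's node-readiness poll (node_ready.go) giving up: it re-queried /api/v1/nodes/no-preload-842269 on a short interval and, once the full 6m0s budget elapsed, surfaced GUEST_START. A rough kubectl equivalent of the same wait, assuming a working kubeconfig for the profile:

    kubectl wait --for=condition=Ready node/no-preload-842269 --timeout=6m

Here it could never have succeeded, since the apiserver was unreachable for the entire window.
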
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 09 05:45:25 no-preload-842269 containerd[555]: time="2025-12-09T05:45:25.686544286Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 09 05:45:25 no-preload-842269 containerd[555]: time="2025-12-09T05:45:25.686621568Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 09 05:45:25 no-preload-842269 containerd[555]: time="2025-12-09T05:45:25.686720651Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 09 05:45:25 no-preload-842269 containerd[555]: time="2025-12-09T05:45:25.686789392Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 09 05:45:25 no-preload-842269 containerd[555]: time="2025-12-09T05:45:25.686856097Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 09 05:45:25 no-preload-842269 containerd[555]: time="2025-12-09T05:45:25.686918545Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 09 05:45:25 no-preload-842269 containerd[555]: time="2025-12-09T05:45:25.686973706Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 09 05:45:25 no-preload-842269 containerd[555]: time="2025-12-09T05:45:25.687041799Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 09 05:45:25 no-preload-842269 containerd[555]: time="2025-12-09T05:45:25.687108406Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 09 05:45:25 no-preload-842269 containerd[555]: time="2025-12-09T05:45:25.687193261Z" level=info msg="Connect containerd service"
	Dec 09 05:45:25 no-preload-842269 containerd[555]: time="2025-12-09T05:45:25.687520145Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 09 05:45:25 no-preload-842269 containerd[555]: time="2025-12-09T05:45:25.688289092Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 09 05:45:25 no-preload-842269 containerd[555]: time="2025-12-09T05:45:25.699337805Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 09 05:45:25 no-preload-842269 containerd[555]: time="2025-12-09T05:45:25.699416343Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 09 05:45:25 no-preload-842269 containerd[555]: time="2025-12-09T05:45:25.699485994Z" level=info msg="Start subscribing containerd event"
	Dec 09 05:45:25 no-preload-842269 containerd[555]: time="2025-12-09T05:45:25.700731392Z" level=info msg="Start recovering state"
	Dec 09 05:45:25 no-preload-842269 containerd[555]: time="2025-12-09T05:45:25.726934659Z" level=info msg="Start event monitor"
	Dec 09 05:45:25 no-preload-842269 containerd[555]: time="2025-12-09T05:45:25.727028597Z" level=info msg="Start cni network conf syncer for default"
	Dec 09 05:45:25 no-preload-842269 containerd[555]: time="2025-12-09T05:45:25.727048600Z" level=info msg="Start streaming server"
	Dec 09 05:45:25 no-preload-842269 containerd[555]: time="2025-12-09T05:45:25.727060752Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 09 05:45:25 no-preload-842269 containerd[555]: time="2025-12-09T05:45:25.727107495Z" level=info msg="runtime interface starting up..."
	Dec 09 05:45:25 no-preload-842269 containerd[555]: time="2025-12-09T05:45:25.727114871Z" level=info msg="starting plugins..."
	Dec 09 05:45:25 no-preload-842269 containerd[555]: time="2025-12-09T05:45:25.727324515Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 09 05:45:25 no-preload-842269 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 09 05:45:25 no-preload-842269 containerd[555]: time="2025-12-09T05:45:25.730766873Z" level=info msg="containerd successfully booted in 0.068739s"
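
The containerd log itself looks healthy ("containerd successfully booted in 0.068739s"); the only error is the CNI config load at init, which is expected on a node where no CNI config has been written yet — the "cni network conf syncer" picks a config up later once kubeadm or an addon drops one in. A quick way to confirm the directory is still empty, assuming SSH access to the node:

    minikube -p no-preload-842269 ssh -- ls /etc/cni/net.d

So the container runtime is not the blocker here; the failure sits above it, in the kubelet.
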
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:51:28.785099    3844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:51:28.785762    3844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:51:28.786968    3844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:51:28.787432    3844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:51:28.788973    3844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +25.904254] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:14] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:16] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:18] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:19] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:30] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:32] overlayfs: idmapped layers are currently not supported
	[ +28.114653] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:33] overlayfs: idmapped layers are currently not supported
	[ +23.720849] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:34] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:35] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:36] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:37] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:38] overlayfs: idmapped layers are currently not supported
	[ +23.656275] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:39] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:57] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:58] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:00] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:02] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:03] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:05] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:06] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 9 05:31] overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	
	
	==> kernel <==
	 05:51:28 up  8:33,  0 user,  load average: 0.72, 0.71, 1.30
	Linux no-preload-842269 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 09 05:51:25 no-preload-842269 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 09 05:51:25 no-preload-842269 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 479.
	Dec 09 05:51:25 no-preload-842269 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 05:51:25 no-preload-842269 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 05:51:25 no-preload-842269 kubelet[3722]: E1209 05:51:25.978226    3722 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 09 05:51:25 no-preload-842269 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 09 05:51:25 no-preload-842269 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 09 05:51:26 no-preload-842269 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 480.
	Dec 09 05:51:26 no-preload-842269 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 05:51:26 no-preload-842269 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 05:51:26 no-preload-842269 kubelet[3728]: E1209 05:51:26.737319    3728 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 09 05:51:26 no-preload-842269 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 09 05:51:26 no-preload-842269 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 09 05:51:27 no-preload-842269 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 481.
	Dec 09 05:51:27 no-preload-842269 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 05:51:27 no-preload-842269 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 05:51:27 no-preload-842269 kubelet[3733]: E1209 05:51:27.483157    3733 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 09 05:51:27 no-preload-842269 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 09 05:51:27 no-preload-842269 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 09 05:51:28 no-preload-842269 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 482.
	Dec 09 05:51:28 no-preload-842269 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 05:51:28 no-preload-842269 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 05:51:28 no-preload-842269 kubelet[3753]: E1209 05:51:28.248581    3753 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 09 05:51:28 no-preload-842269 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 09 05:51:28 no-preload-842269 systemd[1]: kubelet.service: Failed with result 'exit-code'.
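
This crash loop is the likely root cause of everything above: the kubelet exits at config validation on every systemd restart (counter at 482 by now), so no static pods — including kube-apiserver — ever start, which is why every connection to localhost:8443 was refused. The message matches the cgroup v1 deprecation check (this appears to be the KubeletConfiguration `failCgroupV1` option from cgroup v1 maintenance mode) firing on a cgroup v1 host; the kernel section above (5.15.0-1084-aws, a 20.04 build) is consistent with a host that still boots cgroup v1. A quick way to check which cgroup version a host runs:

    stat -fc %T /sys/fs/cgroup    # "cgroup2fs" => cgroup v2; "tmpfs" => cgroup v1/hybrid

If that is the cause, the options are a cgroup v2 host or, where supported, relaxing the check in the kubelet configuration.
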
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-842269 -n no-preload-842269
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-842269 -n no-preload-842269: exit status 2 (377.742758ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "no-preload-842269" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (370.05s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (102.12s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-262540 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1209 05:49:09.296655 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/default-k8s-diff-port-564611/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 05:49:37.000219 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/default-k8s-diff-port-564611/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 05:50:32.751086 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-262540 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m40.494458205s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
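
Same failure class as the no-preload logs above: `addons enable` runs `kubectl apply` inside the node against localhost:8443, the apiserver there is not answering, so validation cannot download the OpenAPI schema and the callback fails after its retries (exit status 10, MK_ADDON_ENABLE). A direct probe of the in-node apiserver, as a sketch assuming curl is available in the node image:

    minikube -p newest-cni-262540 ssh -- curl -ksS https://localhost:8443/livez
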
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-262540 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-262540
helpers_test.go:243: (dbg) docker inspect newest-cni-262540:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ed3de5d59c962edb9b6bef3201cdaec7fe174aa520c616b7fefbc3014b60f5d7",
	        "Created": "2025-12-09T05:40:46.656747886Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1422815,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-09T05:40:46.750006721Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e4eb91ed18a24161fce60c7cdd660144ecd5b8c5029dc2dea2c5e423c2f48ce4",
	        "ResolvConfPath": "/var/lib/docker/containers/ed3de5d59c962edb9b6bef3201cdaec7fe174aa520c616b7fefbc3014b60f5d7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ed3de5d59c962edb9b6bef3201cdaec7fe174aa520c616b7fefbc3014b60f5d7/hostname",
	        "HostsPath": "/var/lib/docker/containers/ed3de5d59c962edb9b6bef3201cdaec7fe174aa520c616b7fefbc3014b60f5d7/hosts",
	        "LogPath": "/var/lib/docker/containers/ed3de5d59c962edb9b6bef3201cdaec7fe174aa520c616b7fefbc3014b60f5d7/ed3de5d59c962edb9b6bef3201cdaec7fe174aa520c616b7fefbc3014b60f5d7-json.log",
	        "Name": "/newest-cni-262540",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-262540:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-262540",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ed3de5d59c962edb9b6bef3201cdaec7fe174aa520c616b7fefbc3014b60f5d7",
	                "LowerDir": "/var/lib/docker/overlay2/6415c7c451ba9f1315edabee60b733d482ef79cadbd27911dea3809654b6c1ec-init/diff:/var/lib/docker/overlay2/c44bb57aa59cc265266f37f2bb6e7ec0e7d641c3b4aeaa57e6d23deec6f0d1d4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6415c7c451ba9f1315edabee60b733d482ef79cadbd27911dea3809654b6c1ec/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6415c7c451ba9f1315edabee60b733d482ef79cadbd27911dea3809654b6c1ec/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6415c7c451ba9f1315edabee60b733d482ef79cadbd27911dea3809654b6c1ec/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-262540",
	                "Source": "/var/lib/docker/volumes/newest-cni-262540/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-262540",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-262540",
	                "name.minikube.sigs.k8s.io": "newest-cni-262540",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9954a06c834f33e28b10a23b7f87c831e396c1056f7a6615dc76e0d514d93454",
	            "SandboxKey": "/var/run/docker/netns/9954a06c834f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34205"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34206"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34209"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34207"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34208"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-262540": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ba:02:a6:df:bc:8f",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "aa89e26051ba524ceb1352e47e7602df84b3dfd74bbc435c72069a1036fceebf",
	                    "EndpointID": "efb22bfc5d2fa7cd356d48b051835d563f10405c6482b333b29bcce636ebb681",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-262540",
	                        "ed3de5d59c96"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
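
One useful detail in the inspect output: the apiserver's 8443/tcp is published only on loopback (127.0.0.1:34208 here), which is how minikube reaches it from the host. Assuming the container is up, the mapped port can be pulled out with an inspect template:

    docker inspect -f '{{ (index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort }}' newest-cni-262540

That only shows the mapping exists; whether anything is listening behind it is a separate question, and per these logs nothing was.
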
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-262540 -n newest-cni-262540
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-262540 -n newest-cni-262540: exit status 6 (336.751659ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1209 05:50:45.573884 1436595 status.go:458] kubeconfig endpoint: get endpoint: "newest-cni-262540" does not appear in /home/jenkins/minikube-integration/22081-1142328/kubeconfig

** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
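The exit status 6 is produced by the kubeconfig check logged from status.go:458: the host is Running, but no cluster entry named after the profile exists in the kubeconfig, so the endpoint lookup fails. A minimal client-go sketch of the shape of that check (assumed dependency; not minikube's actual implementation), with the path and profile name taken from the log:

    // kubecfgcheck.go: report whether a cluster entry for the profile
    // exists in the kubeconfig, mirroring the failed lookup above.
    package main

    import (
        "fmt"

        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        path := "/home/jenkins/minikube-integration/22081-1142328/kubeconfig"
        cfg, err := clientcmd.LoadFromFile(path)
        if err != nil {
            panic(err)
        }
        if _, ok := cfg.Clusters["newest-cni-262540"]; !ok {
            // the condition behind: kubeconfig endpoint: get endpoint: ... does not appear
            fmt.Printf("%q does not appear in %s\n", "newest-cni-262540", path)
        }
    }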
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-262540 logs -n 25
helpers_test.go:260: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                            ARGS                                                                                                                            │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable metrics-server -p embed-certs-432108 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                   │ embed-certs-432108           │ jenkins │ v1.37.0 │ 09 Dec 25 05:36 UTC │ 09 Dec 25 05:36 UTC │
	│ stop    │ -p embed-certs-432108 --alsologtostderr -v=3                                                                                                                                                                                                               │ embed-certs-432108           │ jenkins │ v1.37.0 │ 09 Dec 25 05:36 UTC │ 09 Dec 25 05:36 UTC │
	│ addons  │ enable dashboard -p embed-certs-432108 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                              │ embed-certs-432108           │ jenkins │ v1.37.0 │ 09 Dec 25 05:36 UTC │ 09 Dec 25 05:36 UTC │
	│ start   │ -p embed-certs-432108 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-432108           │ jenkins │ v1.37.0 │ 09 Dec 25 05:36 UTC │ 09 Dec 25 05:37 UTC │
	│ image   │ embed-certs-432108 image list --format=json                                                                                                                                                                                                                │ embed-certs-432108           │ jenkins │ v1.37.0 │ 09 Dec 25 05:37 UTC │ 09 Dec 25 05:37 UTC │
	│ pause   │ -p embed-certs-432108 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-432108           │ jenkins │ v1.37.0 │ 09 Dec 25 05:37 UTC │ 09 Dec 25 05:37 UTC │
	│ unpause │ -p embed-certs-432108 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-432108           │ jenkins │ v1.37.0 │ 09 Dec 25 05:37 UTC │ 09 Dec 25 05:37 UTC │
	│ delete  │ -p embed-certs-432108                                                                                                                                                                                                                                      │ embed-certs-432108           │ jenkins │ v1.37.0 │ 09 Dec 25 05:37 UTC │ 09 Dec 25 05:37 UTC │
	│ delete  │ -p embed-certs-432108                                                                                                                                                                                                                                      │ embed-certs-432108           │ jenkins │ v1.37.0 │ 09 Dec 25 05:37 UTC │ 09 Dec 25 05:37 UTC │
	│ start   │ -p default-k8s-diff-port-564611 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-564611 │ jenkins │ v1.37.0 │ 09 Dec 25 05:37 UTC │ 09 Dec 25 05:39 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-564611 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ default-k8s-diff-port-564611 │ jenkins │ v1.37.0 │ 09 Dec 25 05:39 UTC │ 09 Dec 25 05:39 UTC │
	│ stop    │ -p default-k8s-diff-port-564611 --alsologtostderr -v=3                                                                                                                                                                                                     │ default-k8s-diff-port-564611 │ jenkins │ v1.37.0 │ 09 Dec 25 05:39 UTC │ 09 Dec 25 05:39 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-564611 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ default-k8s-diff-port-564611 │ jenkins │ v1.37.0 │ 09 Dec 25 05:39 UTC │ 09 Dec 25 05:39 UTC │
	│ start   │ -p default-k8s-diff-port-564611 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-564611 │ jenkins │ v1.37.0 │ 09 Dec 25 05:39 UTC │ 09 Dec 25 05:40 UTC │
	│ image   │ default-k8s-diff-port-564611 image list --format=json                                                                                                                                                                                                      │ default-k8s-diff-port-564611 │ jenkins │ v1.37.0 │ 09 Dec 25 05:40 UTC │ 09 Dec 25 05:40 UTC │
	│ pause   │ -p default-k8s-diff-port-564611 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-564611 │ jenkins │ v1.37.0 │ 09 Dec 25 05:40 UTC │ 09 Dec 25 05:40 UTC │
	│ unpause │ -p default-k8s-diff-port-564611 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-564611 │ jenkins │ v1.37.0 │ 09 Dec 25 05:40 UTC │ 09 Dec 25 05:40 UTC │
	│ delete  │ -p default-k8s-diff-port-564611                                                                                                                                                                                                                            │ default-k8s-diff-port-564611 │ jenkins │ v1.37.0 │ 09 Dec 25 05:40 UTC │ 09 Dec 25 05:40 UTC │
	│ delete  │ -p default-k8s-diff-port-564611                                                                                                                                                                                                                            │ default-k8s-diff-port-564611 │ jenkins │ v1.37.0 │ 09 Dec 25 05:40 UTC │ 09 Dec 25 05:40 UTC │
	│ start   │ -p newest-cni-262540 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-262540            │ jenkins │ v1.37.0 │ 09 Dec 25 05:40 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-842269 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ no-preload-842269            │ jenkins │ v1.37.0 │ 09 Dec 25 05:43 UTC │                     │
	│ stop    │ -p no-preload-842269 --alsologtostderr -v=3                                                                                                                                                                                                                │ no-preload-842269            │ jenkins │ v1.37.0 │ 09 Dec 25 05:45 UTC │ 09 Dec 25 05:45 UTC │
	│ addons  │ enable dashboard -p no-preload-842269 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                               │ no-preload-842269            │ jenkins │ v1.37.0 │ 09 Dec 25 05:45 UTC │ 09 Dec 25 05:45 UTC │
	│ start   │ -p no-preload-842269 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-842269            │ jenkins │ v1.37.0 │ 09 Dec 25 05:45 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-262540 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ newest-cni-262540            │ jenkins │ v1.37.0 │ 09 Dec 25 05:49 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/09 05:45:19
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 05:45:19.304985 1429857 out.go:360] Setting OutFile to fd 1 ...
	I1209 05:45:19.305094 1429857 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 05:45:19.305101 1429857 out.go:374] Setting ErrFile to fd 2...
	I1209 05:45:19.305106 1429857 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 05:45:19.305469 1429857 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-1142328/.minikube/bin
	I1209 05:45:19.305897 1429857 out.go:368] Setting JSON to false
	I1209 05:45:19.307371 1429857 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":30443,"bootTime":1765228677,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1209 05:45:19.307474 1429857 start.go:143] virtualization:  
	I1209 05:45:19.312362 1429857 out.go:179] * [no-preload-842269] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1209 05:45:19.315432 1429857 out.go:179]   - MINIKUBE_LOCATION=22081
	I1209 05:45:19.315644 1429857 notify.go:221] Checking for updates...
	I1209 05:45:19.321156 1429857 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 05:45:19.324049 1429857 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22081-1142328/kubeconfig
	I1209 05:45:19.326954 1429857 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-1142328/.minikube
	I1209 05:45:19.329810 1429857 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1209 05:45:19.332669 1429857 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 05:45:19.336051 1429857 config.go:182] Loaded profile config "no-preload-842269": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1209 05:45:19.336708 1429857 driver.go:422] Setting default libvirt URI to qemu:///system
	I1209 05:45:19.364223 1429857 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1209 05:45:19.364347 1429857 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 05:45:19.423199 1429857 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-09 05:45:19.414226912 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1209 05:45:19.423304 1429857 docker.go:319] overlay module found
	I1209 05:45:19.426467 1429857 out.go:179] * Using the docker driver based on existing profile
	I1209 05:45:19.429450 1429857 start.go:309] selected driver: docker
	I1209 05:45:19.429469 1429857 start.go:927] validating driver "docker" against &{Name:no-preload-842269 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-842269 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 05:45:19.429573 1429857 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 05:45:19.430271 1429857 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 05:45:19.484934 1429857 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-09 05:45:19.476108747 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1209 05:45:19.485260 1429857 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 05:45:19.485294 1429857 cni.go:84] Creating CNI manager for ""
	I1209 05:45:19.485352 1429857 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1209 05:45:19.485394 1429857 start.go:353] cluster config:
	{Name:no-preload-842269 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-842269 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 05:45:19.488591 1429857 out.go:179] * Starting "no-preload-842269" primary control-plane node in "no-preload-842269" cluster
	I1209 05:45:19.491427 1429857 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1209 05:45:19.494310 1429857 out.go:179] * Pulling base image v0.0.48-1765184860-22066 ...
	I1209 05:45:19.497153 1429857 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1209 05:45:19.497221 1429857 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local docker daemon
	I1209 05:45:19.497291 1429857 profile.go:143] Saving config to /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/no-preload-842269/config.json ...
	I1209 05:45:19.497571 1429857 cache.go:107] acquiring lock: {Name:mkf65d4ffaf3daf987b7ba0301a9962f00106981 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 05:45:19.497666 1429857 cache.go:115] /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1209 05:45:19.497678 1429857 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22081-1142328/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 116.666µs
	I1209 05:45:19.497690 1429857 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1209 05:45:19.497702 1429857 cache.go:107] acquiring lock: {Name:mk4d0c4ab95f11691dbecfbd7b2c72b3028abf9f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 05:45:19.497735 1429857 cache.go:115] /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1209 05:45:19.497745 1429857 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22081-1142328/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 45.152µs
	I1209 05:45:19.497752 1429857 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1209 05:45:19.497766 1429857 cache.go:107] acquiring lock: {Name:mk7cb8e420e05ffddcb417dedf3ddace46afcf1b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 05:45:19.497807 1429857 cache.go:115] /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1209 05:45:19.497815 1429857 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22081-1142328/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 50.033µs
	I1209 05:45:19.497822 1429857 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1209 05:45:19.497835 1429857 cache.go:107] acquiring lock: {Name:mka2eb1b7c29ae7ae604d5f65c47b25198cfb45b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 05:45:19.497867 1429857 cache.go:115] /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1209 05:45:19.497876 1429857 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22081-1142328/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 42.009µs
	I1209 05:45:19.497883 1429857 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1209 05:45:19.497892 1429857 cache.go:107] acquiring lock: {Name:mkade1779cb2ecc1c54a36bd1719bf2ef87bdf51 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 05:45:19.497922 1429857 cache.go:115] /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1209 05:45:19.497931 1429857 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22081-1142328/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 40.704µs
	I1209 05:45:19.497942 1429857 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1209 05:45:19.497955 1429857 cache.go:107] acquiring lock: {Name:mk604b76e7428f7b39bf507a7086fea810617cc7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 05:45:19.497987 1429857 cache.go:115] /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1209 05:45:19.497996 1429857 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22081-1142328/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 42.46µs
	I1209 05:45:19.498002 1429857 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1209 05:45:19.498011 1429857 cache.go:107] acquiring lock: {Name:mk605cb0bdcc667f1a6cc01dc2d318b41822c88f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 05:45:19.498037 1429857 cache.go:115] /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1209 05:45:19.498046 1429857 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/22081-1142328/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 36.306µs
	I1209 05:45:19.498052 1429857 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1209 05:45:19.498060 1429857 cache.go:107] acquiring lock: {Name:mk288542758fec96b5cb8ac3de75700c31bfbfc0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 05:45:19.498089 1429857 cache.go:115] /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1209 05:45:19.498098 1429857 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22081-1142328/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 38.916µs
	I1209 05:45:19.498104 1429857 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1209 05:45:19.498110 1429857 cache.go:87] Successfully saved all images to host disk.
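Every cache.go block above follows the same short-circuit: acquire the per-image lock, stat the tarball under cache/images/arm64, and skip the pull-and-save when it already exists, which is why each image "succeeded" in tens of microseconds. A sketch of that stat-before-save check (hypothetical helper; the path layout is taken from the log):

    // cachecheck.go: skip saving an image tarball that is already on disk.
    package main

    import (
        "fmt"
        "os"
    )

    // cached reports whether the image has already been saved to tarPath.
    func cached(tarPath string) bool {
        _, err := os.Stat(tarPath)
        return err == nil
    }

    func main() {
        p := "/home/jenkins/minikube-integration/22081-1142328/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1"
        if cached(p) {
            fmt.Println("cache hit, skipping save:", p)
        }
    }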
	I1209 05:45:19.517152 1429857 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local docker daemon, skipping pull
	I1209 05:45:19.517175 1429857 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c exists in daemon, skipping load
	I1209 05:45:19.517194 1429857 cache.go:243] Successfully downloaded all kic artifacts
	I1209 05:45:19.517225 1429857 start.go:360] acquireMachinesLock for no-preload-842269: {Name:mk19b7be61094a19b29603fb95f6d7b282529614 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 05:45:19.517288 1429857 start.go:364] duration metric: took 43.707µs to acquireMachinesLock for "no-preload-842269"
	I1209 05:45:19.517311 1429857 start.go:96] Skipping create...Using existing machine configuration
	I1209 05:45:19.517320 1429857 fix.go:54] fixHost starting: 
	I1209 05:45:19.517582 1429857 cli_runner.go:164] Run: docker container inspect no-preload-842269 --format={{.State.Status}}
	I1209 05:45:19.535058 1429857 fix.go:112] recreateIfNeeded on no-preload-842269: state=Stopped err=<nil>
	W1209 05:45:19.535086 1429857 fix.go:138] unexpected machine state, will restart: <nil>
	I1209 05:45:19.538423 1429857 out.go:252] * Restarting existing docker container for "no-preload-842269" ...
	I1209 05:45:19.538508 1429857 cli_runner.go:164] Run: docker start no-preload-842269
	I1209 05:45:19.801093 1429857 cli_runner.go:164] Run: docker container inspect no-preload-842269 --format={{.State.Status}}
	I1209 05:45:19.824109 1429857 kic.go:430] container "no-preload-842269" state is running.
	I1209 05:45:19.824800 1429857 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-842269
	I1209 05:45:19.850927 1429857 profile.go:143] Saving config to /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/no-preload-842269/config.json ...
	I1209 05:45:19.851169 1429857 machine.go:94] provisionDockerMachine start ...
	I1209 05:45:19.851233 1429857 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-842269
	I1209 05:45:19.872359 1429857 main.go:143] libmachine: Using SSH client type: native
	I1209 05:45:19.872683 1429857 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 34210 <nil> <nil>}
	I1209 05:45:19.872698 1429857 main.go:143] libmachine: About to run SSH command:
	hostname
	I1209 05:45:19.873510 1429857 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1209 05:45:23.031698 1429857 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-842269
	
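The "ssh: handshake failed: EOF" above is transient: sshd inside the just-restarted container is not accepting connections yet, and the provisioner simply retries until the hostname command succeeds a few seconds later. A sketch of that dial-with-retry pattern using golang.org/x/crypto/ssh (hypothetical helper; the port and key path are taken from this log):

    // sshretry.go: keep dialing until the container's sshd is up, then
    // return a usable client (cf. the EOF-then-success sequence above).
    package main

    import (
        "log"
        "os"
        "time"

        "golang.org/x/crypto/ssh"
    )

    func dialWithRetry(addr, keyPath string) (*ssh.Client, error) {
        key, err := os.ReadFile(keyPath)
        if err != nil {
            return nil, err
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            return nil, err
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test machines only
        }
        var c *ssh.Client
        for i := 0; i < 30; i++ { // early handshakes fail with EOF while sshd starts
            if c, err = ssh.Dial("tcp", addr, cfg); err == nil {
                return c, nil
            }
            time.Sleep(time.Second)
        }
        return nil, err
    }

    func main() {
        c, err := dialWithRetry("127.0.0.1:34210",
            "/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/no-preload-842269/id_rsa")
        if err != nil {
            log.Fatal(err)
        }
        defer c.Close()
        log.Println("ssh up")
    }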
	I1209 05:45:23.031723 1429857 ubuntu.go:182] provisioning hostname "no-preload-842269"
	I1209 05:45:23.031788 1429857 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-842269
	I1209 05:45:23.049528 1429857 main.go:143] libmachine: Using SSH client type: native
	I1209 05:45:23.049842 1429857 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 34210 <nil> <nil>}
	I1209 05:45:23.049866 1429857 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-842269 && echo "no-preload-842269" | sudo tee /etc/hostname
	I1209 05:45:23.212560 1429857 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-842269
	
	I1209 05:45:23.212638 1429857 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-842269
	I1209 05:45:23.230939 1429857 main.go:143] libmachine: Using SSH client type: native
	I1209 05:45:23.231248 1429857 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 34210 <nil> <nil>}
	I1209 05:45:23.231264 1429857 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-842269' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-842269/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-842269' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 05:45:23.384444 1429857 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1209 05:45:23.384483 1429857 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22081-1142328/.minikube CaCertPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22081-1142328/.minikube}
	I1209 05:45:23.384506 1429857 ubuntu.go:190] setting up certificates
	I1209 05:45:23.384523 1429857 provision.go:84] configureAuth start
	I1209 05:45:23.384590 1429857 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-842269
	I1209 05:45:23.401432 1429857 provision.go:143] copyHostCerts
	I1209 05:45:23.401503 1429857 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.pem, removing ...
	I1209 05:45:23.401518 1429857 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.pem
	I1209 05:45:23.401593 1429857 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.pem (1078 bytes)
	I1209 05:45:23.401705 1429857 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-1142328/.minikube/cert.pem, removing ...
	I1209 05:45:23.401714 1429857 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-1142328/.minikube/cert.pem
	I1209 05:45:23.401742 1429857 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22081-1142328/.minikube/cert.pem (1123 bytes)
	I1209 05:45:23.401834 1429857 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-1142328/.minikube/key.pem, removing ...
	I1209 05:45:23.401844 1429857 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-1142328/.minikube/key.pem
	I1209 05:45:23.401870 1429857 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22081-1142328/.minikube/key.pem (1675 bytes)
	I1209 05:45:23.401918 1429857 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca-key.pem org=jenkins.no-preload-842269 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-842269]
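provision.go:117 above regenerates the machine's server certificate so that its SANs match the san=[...] list. A compressed crypto/x509 sketch of that step (self-signed here for brevity; minikube actually signs with the profile CA passed as ca-key=...):

    // makecert.go: emit a PEM server cert whose SANs match the log's
    // san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-842269].
    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-842269"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
            DNSNames:     []string{"localhost", "minikube", "no-preload-842269"},
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }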
	I1209 05:45:24.117829 1429857 provision.go:177] copyRemoteCerts
	I1209 05:45:24.117899 1429857 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 05:45:24.117948 1429857 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-842269
	I1209 05:45:24.136847 1429857 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34210 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/no-preload-842269/id_rsa Username:docker}
	I1209 05:45:24.243917 1429857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1209 05:45:24.261228 1429857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1209 05:45:24.278688 1429857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1209 05:45:24.295602 1429857 provision.go:87] duration metric: took 911.052498ms to configureAuth
	I1209 05:45:24.295630 1429857 ubuntu.go:206] setting minikube options for container-runtime
	I1209 05:45:24.295821 1429857 config.go:182] Loaded profile config "no-preload-842269": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1209 05:45:24.295834 1429857 machine.go:97] duration metric: took 4.444658101s to provisionDockerMachine
	I1209 05:45:24.295843 1429857 start.go:293] postStartSetup for "no-preload-842269" (driver="docker")
	I1209 05:45:24.295853 1429857 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 05:45:24.295939 1429857 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 05:45:24.295989 1429857 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-842269
	I1209 05:45:24.313358 1429857 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34210 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/no-preload-842269/id_rsa Username:docker}
	I1209 05:45:24.419729 1429857 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 05:45:24.423044 1429857 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1209 05:45:24.423074 1429857 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1209 05:45:24.423102 1429857 filesync.go:126] Scanning /home/jenkins/minikube-integration/22081-1142328/.minikube/addons for local assets ...
	I1209 05:45:24.423160 1429857 filesync.go:126] Scanning /home/jenkins/minikube-integration/22081-1142328/.minikube/files for local assets ...
	I1209 05:45:24.423286 1429857 filesync.go:149] local asset: /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/ssl/certs/11442312.pem -> 11442312.pem in /etc/ssl/certs
	I1209 05:45:24.423403 1429857 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 05:45:24.430577 1429857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/ssl/certs/11442312.pem --> /etc/ssl/certs/11442312.pem (1708 bytes)
	I1209 05:45:24.448642 1429857 start.go:296] duration metric: took 152.783704ms for postStartSetup
	I1209 05:45:24.448752 1429857 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1209 05:45:24.448804 1429857 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-842269
	I1209 05:45:24.475577 1429857 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34210 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/no-preload-842269/id_rsa Username:docker}
	I1209 05:45:24.577211 1429857 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1209 05:45:24.581897 1429857 fix.go:56] duration metric: took 5.064569479s for fixHost
	I1209 05:45:24.581929 1429857 start.go:83] releasing machines lock for "no-preload-842269", held for 5.064623763s
	I1209 05:45:24.582003 1429857 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-842269
	I1209 05:45:24.598849 1429857 ssh_runner.go:195] Run: cat /version.json
	I1209 05:45:24.598910 1429857 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-842269
	I1209 05:45:24.599176 1429857 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 05:45:24.599236 1429857 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-842269
	I1209 05:45:24.617491 1429857 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34210 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/no-preload-842269/id_rsa Username:docker}
	I1209 05:45:24.625861 1429857 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34210 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/no-preload-842269/id_rsa Username:docker}
	I1209 05:45:24.719702 1429857 ssh_runner.go:195] Run: systemctl --version
	I1209 05:45:24.811867 1429857 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 05:45:24.816351 1429857 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 05:45:24.816436 1429857 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 05:45:24.824370 1429857 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1209 05:45:24.824393 1429857 start.go:496] detecting cgroup driver to use...
	I1209 05:45:24.824424 1429857 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1209 05:45:24.824478 1429857 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1209 05:45:24.842259 1429857 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1209 05:45:24.856877 1429857 docker.go:218] disabling cri-docker service (if available) ...
	I1209 05:45:24.856943 1429857 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 05:45:24.872872 1429857 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 05:45:24.886154 1429857 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 05:45:24.999208 1429857 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 05:45:25.121326 1429857 docker.go:234] disabling docker service ...
	I1209 05:45:25.121413 1429857 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 05:45:25.137073 1429857 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 05:45:25.150656 1429857 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 05:45:25.286510 1429857 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 05:45:25.394076 1429857 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 05:45:25.406549 1429857 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 05:45:25.420965 1429857 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1209 05:45:25.429321 1429857 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1209 05:45:25.437986 1429857 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1209 05:45:25.438077 1429857 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1209 05:45:25.447132 1429857 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1209 05:45:25.456037 1429857 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1209 05:45:25.464470 1429857 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1209 05:45:25.472760 1429857 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 05:45:25.480756 1429857 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1209 05:45:25.489194 1429857 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1209 05:45:25.497557 1429857 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1209 05:45:25.506153 1429857 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 05:45:25.513357 1429857 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 05:45:25.520101 1429857 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 05:45:25.626477 1429857 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1209 05:45:25.729432 1429857 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1209 05:45:25.729500 1429857 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1209 05:45:25.733824 1429857 start.go:564] Will wait 60s for crictl version
	I1209 05:45:25.733937 1429857 ssh_runner.go:195] Run: which crictl
	I1209 05:45:25.738223 1429857 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1209 05:45:25.764110 1429857 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1209 05:45:25.764179 1429857 ssh_runner.go:195] Run: containerd --version
	I1209 05:45:25.784097 1429857 ssh_runner.go:195] Run: containerd --version
	I1209 05:45:25.809525 1429857 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1209 05:45:25.812650 1429857 cli_runner.go:164] Run: docker network inspect no-preload-842269 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1209 05:45:25.828380 1429857 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1209 05:45:25.832220 1429857 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 05:45:25.842204 1429857 kubeadm.go:884] updating cluster {Name:no-preload-842269 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-842269 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 05:45:25.842335 1429857 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1209 05:45:25.842398 1429857 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 05:45:25.869412 1429857 containerd.go:627] all images are preloaded for containerd runtime.
	I1209 05:45:25.869438 1429857 cache_images.go:86] Images are preloaded, skipping loading
	I1209 05:45:25.869445 1429857 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1209 05:45:25.869544 1429857 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-842269 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-842269 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1209 05:45:25.869609 1429857 ssh_runner.go:195] Run: sudo crictl info
	I1209 05:45:25.894672 1429857 cni.go:84] Creating CNI manager for ""
	I1209 05:45:25.894698 1429857 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1209 05:45:25.894720 1429857 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1209 05:45:25.894751 1429857 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-842269 NodeName:no-preload-842269 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1209 05:45:25.894907 1429857 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-842269"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1209 05:45:25.894981 1429857 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1209 05:45:25.902766 1429857 binaries.go:51] Found k8s binaries, skipping transfer
	I1209 05:45:25.902838 1429857 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1209 05:45:25.910455 1429857 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1209 05:45:25.923076 1429857 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1209 05:45:25.937650 1429857 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
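	The staged kubeadm.yaml.new above bundles the InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration printed earlier. As a minimal sketch (assuming a kubeadm release recent enough to ship the "config validate" subcommand, and the binaries path seen in this log), the staged file could be sanity-checked on the node by hand:
	
	# Sketch only: validate the staged kubeadm config before kubelet picks it up.
	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new
	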
	I1209 05:45:25.951420 1429857 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1209 05:45:25.955331 1429857 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
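	The one-liner above is minikube's idempotent hosts-entry pattern: drop any existing line for the control-plane name, append the fresh IP mapping, and install the result via a temp file so the write into /etc/hosts happens in a single sudo cp. A minimal generalized sketch (HOST_IP and HOST_NAME are illustrative placeholders, not values taken from this run):
	
	# Sketch: idempotently pin a name to an IP in /etc/hosts.
	HOST_IP=192.168.85.2
	HOST_NAME=control-plane.minikube.internal
	{ grep -vE "[[:space:]]${HOST_NAME}\$" /etc/hosts; printf '%s\t%s\n' "$HOST_IP" "$HOST_NAME"; } > "/tmp/hosts.$$"
	sudo cp "/tmp/hosts.$$" /etc/hosts && rm -f "/tmp/hosts.$$"
	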
	I1209 05:45:25.964795 1429857 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 05:45:26.082166 1429857 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 05:45:26.100679 1429857 certs.go:69] Setting up /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/no-preload-842269 for IP: 192.168.85.2
	I1209 05:45:26.100745 1429857 certs.go:195] generating shared ca certs ...
	I1209 05:45:26.100786 1429857 certs.go:227] acquiring lock for ca certs: {Name:mk15788702f8c4e23b5aeab3f44961d296fab259 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 05:45:26.100943 1429857 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.key
	I1209 05:45:26.101025 1429857 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/proxy-client-ca.key
	I1209 05:45:26.101056 1429857 certs.go:257] generating profile certs ...
	I1209 05:45:26.101186 1429857 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/no-preload-842269/client.key
	I1209 05:45:26.101295 1429857 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/no-preload-842269/apiserver.key.135a6aab
	I1209 05:45:26.101368 1429857 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/no-preload-842269/proxy-client.key
	I1209 05:45:26.101513 1429857 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/1144231.pem (1338 bytes)
	W1209 05:45:26.101579 1429857 certs.go:480] ignoring /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/1144231_empty.pem, impossibly tiny 0 bytes
	I1209 05:45:26.101605 1429857 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca-key.pem (1679 bytes)
	I1209 05:45:26.101652 1429857 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem (1078 bytes)
	I1209 05:45:26.101704 1429857 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/cert.pem (1123 bytes)
	I1209 05:45:26.101777 1429857 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/key.pem (1675 bytes)
	I1209 05:45:26.101861 1429857 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/ssl/certs/11442312.pem (1708 bytes)
	I1209 05:45:26.102562 1429857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 05:45:26.122800 1429857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1209 05:45:26.142042 1429857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 05:45:26.161502 1429857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1209 05:45:26.179586 1429857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/no-preload-842269/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1209 05:45:26.196698 1429857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/no-preload-842269/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1209 05:45:26.212945 1429857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/no-preload-842269/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 05:45:26.230416 1429857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/no-preload-842269/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1209 05:45:26.247147 1429857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/ssl/certs/11442312.pem --> /usr/share/ca-certificates/11442312.pem (1708 bytes)
	I1209 05:45:26.265734 1429857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 05:45:26.282961 1429857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/1144231.pem --> /usr/share/ca-certificates/1144231.pem (1338 bytes)
	I1209 05:45:26.300125 1429857 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 05:45:26.312156 1429857 ssh_runner.go:195] Run: openssl version
	I1209 05:45:26.318566 1429857 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/11442312.pem
	I1209 05:45:26.329117 1429857 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/11442312.pem /etc/ssl/certs/11442312.pem
	I1209 05:45:26.336403 1429857 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11442312.pem
	I1209 05:45:26.340126 1429857 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 04:18 /usr/share/ca-certificates/11442312.pem
	I1209 05:45:26.340197 1429857 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11442312.pem
	I1209 05:45:26.383366 1429857 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1209 05:45:26.390871 1429857 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1209 05:45:26.398106 1429857 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1209 05:45:26.405814 1429857 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 05:45:26.409683 1429857 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 04:09 /usr/share/ca-certificates/minikubeCA.pem
	I1209 05:45:26.409750 1429857 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 05:45:26.450573 1429857 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1209 05:45:26.458322 1429857 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1144231.pem
	I1209 05:45:26.465833 1429857 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1144231.pem /etc/ssl/certs/1144231.pem
	I1209 05:45:26.473482 1429857 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1144231.pem
	I1209 05:45:26.477501 1429857 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 04:18 /usr/share/ca-certificates/1144231.pem
	I1209 05:45:26.477569 1429857 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1144231.pem
	I1209 05:45:26.518776 1429857 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
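	The "openssl x509 -hash" calls above print OpenSSL's subject-name hash for each PEM, and the following "test -L /etc/ssl/certs/<hash>.0" probes confirm that the symlink OpenSSL consults during verification exists (in this run, b5213941 is minikubeCA's hash). A minimal sketch of the same check:
	
	# Sketch: recompute a CA's subject hash and verify its /etc/ssl/certs link.
	CERT=/usr/share/ca-certificates/minikubeCA.pem     # path from this run
	HASH=$(openssl x509 -hash -noout -in "$CERT")      # prints e.g. b5213941
	test -L "/etc/ssl/certs/${HASH}.0" && echo "hash symlink present"
	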
	I1209 05:45:26.526248 1429857 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 05:45:26.529980 1429857 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1209 05:45:26.572441 1429857 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1209 05:45:26.613785 1429857 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1209 05:45:26.655322 1429857 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1209 05:45:26.696546 1429857 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1209 05:45:26.739135 1429857 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
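	Each "-checkend 86400" probe above exits non-zero if the certificate will expire within the next 86400 seconds (24 hours), which is how minikube decides whether the control-plane certs need regeneration. A sketch that loops the same probe over a few of the certs checked above:
	
	# Sketch: flag any control-plane client cert expiring within 24h.
	for crt in apiserver-etcd-client apiserver-kubelet-client front-proxy-client; do
	  openssl x509 -noout -in "/var/lib/minikube/certs/${crt}.crt" -checkend 86400 \
	    || echo "${crt}.crt expires within 24h"
	done
	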
	I1209 05:45:26.780278 1429857 kubeadm.go:401] StartCluster: {Name:no-preload-842269 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-842269 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 05:45:26.780376 1429857 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1209 05:45:26.780450 1429857 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 05:45:26.805821 1429857 cri.go:89] found id: ""
	I1209 05:45:26.805924 1429857 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1209 05:45:26.813920 1429857 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1209 05:45:26.813941 1429857 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1209 05:45:26.814022 1429857 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1209 05:45:26.821515 1429857 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1209 05:45:26.821952 1429857 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-842269" does not appear in /home/jenkins/minikube-integration/22081-1142328/kubeconfig
	I1209 05:45:26.822061 1429857 kubeconfig.go:62] /home/jenkins/minikube-integration/22081-1142328/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-842269" cluster setting kubeconfig missing "no-preload-842269" context setting]
	I1209 05:45:26.822332 1429857 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1142328/kubeconfig: {Name:mk0dc127429f88dc4fdfb2d110deebc58207c1b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 05:45:26.823581 1429857 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1209 05:45:26.832049 1429857 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1209 05:45:26.832081 1429857 kubeadm.go:602] duration metric: took 18.134254ms to restartPrimaryControlPlane
	I1209 05:45:26.832090 1429857 kubeadm.go:403] duration metric: took 51.823986ms to StartCluster
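	The reconfiguration decision above hinges on a plain diff of the staged config against the live one: exit status 0 from the diff at 05:45:26.823581 means nothing changed, so the restart path skips re-running kubeadm. The equivalent check by hand, as a sketch:
	
	# Sketch: zero exit from diff means no kubeadm reconfiguration is needed.
	sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new \
	  && echo "running cluster config is current"
	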
	I1209 05:45:26.832105 1429857 settings.go:142] acquiring lock: {Name:mk8fa744e3d74bf8a1cbf5ac275c9f1969ad91a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 05:45:26.832161 1429857 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22081-1142328/kubeconfig
	I1209 05:45:26.832776 1429857 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1142328/kubeconfig: {Name:mk0dc127429f88dc4fdfb2d110deebc58207c1b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 05:45:26.832985 1429857 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1209 05:45:26.833266 1429857 config.go:182] Loaded profile config "no-preload-842269": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1209 05:45:26.833313 1429857 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1209 05:45:26.833376 1429857 addons.go:70] Setting storage-provisioner=true in profile "no-preload-842269"
	I1209 05:45:26.833395 1429857 addons.go:239] Setting addon storage-provisioner=true in "no-preload-842269"
	I1209 05:45:26.833419 1429857 host.go:66] Checking if "no-preload-842269" exists ...
	I1209 05:45:26.833892 1429857 cli_runner.go:164] Run: docker container inspect no-preload-842269 --format={{.State.Status}}
	I1209 05:45:26.834299 1429857 addons.go:70] Setting default-storageclass=true in profile "no-preload-842269"
	I1209 05:45:26.834336 1429857 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-842269"
	I1209 05:45:26.834606 1429857 cli_runner.go:164] Run: docker container inspect no-preload-842269 --format={{.State.Status}}
	I1209 05:45:26.836968 1429857 addons.go:70] Setting dashboard=true in profile "no-preload-842269"
	I1209 05:45:26.837045 1429857 addons.go:239] Setting addon dashboard=true in "no-preload-842269"
	W1209 05:45:26.837069 1429857 addons.go:248] addon dashboard should already be in state true
	I1209 05:45:26.837176 1429857 host.go:66] Checking if "no-preload-842269" exists ...
	I1209 05:45:26.838703 1429857 cli_runner.go:164] Run: docker container inspect no-preload-842269 --format={{.State.Status}}
	I1209 05:45:26.840073 1429857 out.go:179] * Verifying Kubernetes components...
	I1209 05:45:26.843169 1429857 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 05:45:26.862933 1429857 addons.go:239] Setting addon default-storageclass=true in "no-preload-842269"
	I1209 05:45:26.862982 1429857 host.go:66] Checking if "no-preload-842269" exists ...
	I1209 05:45:26.863397 1429857 cli_runner.go:164] Run: docker container inspect no-preload-842269 --format={{.State.Status}}
	I1209 05:45:26.876649 1429857 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 05:45:26.882278 1429857 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 05:45:26.882312 1429857 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1209 05:45:26.882383 1429857 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-842269
	I1209 05:45:26.897424 1429857 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1209 05:45:26.900169 1429857 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1209 05:45:26.906221 1429857 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1209 05:45:26.906259 1429857 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1209 05:45:26.906343 1429857 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-842269
	I1209 05:45:26.924326 1429857 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1209 05:45:26.924348 1429857 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1209 05:45:26.924420 1429857 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-842269
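	The inspect template used three times above pulls the host port Docker mapped to the container's 22/tcp; that is the port (127.0.0.1:34210 in this run) the SSH clients below connect to. Standalone, with the container name from this run:
	
	# Sketch: resolve the host port mapped to a kic container's SSH port.
	docker container inspect \
	  -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
	  no-preload-842269
	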
	I1209 05:45:26.963193 1429857 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34210 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/no-preload-842269/id_rsa Username:docker}
	I1209 05:45:26.976834 1429857 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34210 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/no-preload-842269/id_rsa Username:docker}
	I1209 05:45:26.980349 1429857 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34210 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/no-preload-842269/id_rsa Username:docker}
	I1209 05:45:27.069732 1429857 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 05:45:27.125899 1429857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 05:45:27.154169 1429857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1209 05:45:27.157146 1429857 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1209 05:45:27.157166 1429857 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1209 05:45:27.225908 1429857 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1209 05:45:27.225931 1429857 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1209 05:45:27.240153 1429857 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1209 05:45:27.240176 1429857 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1209 05:45:27.253621 1429857 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1209 05:45:27.253645 1429857 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1209 05:45:27.266747 1429857 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1209 05:45:27.266820 1429857 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1209 05:45:27.280090 1429857 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1209 05:45:27.280113 1429857 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1209 05:45:27.292756 1429857 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1209 05:45:27.292820 1429857 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1209 05:45:27.305913 1429857 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1209 05:45:27.305935 1429857 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1209 05:45:27.318675 1429857 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1209 05:45:27.318701 1429857 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1209 05:45:27.331338 1429857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1209 05:45:27.683021 1429857 node_ready.go:35] waiting up to 6m0s for node "no-preload-842269" to be "Ready" ...
	W1209 05:45:27.683361 1429857 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:45:27.683395 1429857 retry.go:31] will retry after 184.58375ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 05:45:27.683444 1429857 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:45:27.683451 1429857 retry.go:31] will retry after 269.389918ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 05:45:27.683630 1429857 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:45:27.683645 1429857 retry.go:31] will retry after 361.009314ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:45:27.869176 1429857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1209 05:45:27.925658 1429857 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:45:27.925689 1429857 retry.go:31] will retry after 219.894467ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:45:27.953869 1429857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1209 05:45:28.020255 1429857 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:45:28.020343 1429857 retry.go:31] will retry after 279.215289ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:45:28.045549 1429857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1209 05:45:28.108956 1429857 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:45:28.108989 1429857 retry.go:31] will retry after 273.063822ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:45:28.146313 1429857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1209 05:45:28.216595 1429857 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:45:28.216628 1429857 retry.go:31] will retry after 381.056559ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:45:28.300048 1429857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1209 05:45:28.357345 1429857 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:45:28.357379 1429857 retry.go:31] will retry after 809.396818ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:45:28.382541 1429857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1209 05:45:28.448575 1429857 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:45:28.448609 1429857 retry.go:31] will retry after 547.183213ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:45:28.597889 1429857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1209 05:45:28.654047 1429857 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:45:28.654084 1429857 retry.go:31] will retry after 1.262178547s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:45:28.996073 1429857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1209 05:45:29.058678 1429857 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:45:29.058721 1429857 retry.go:31] will retry after 492.162637ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1209 05:45:29.167844 1429857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1209 05:45:29.255905 1429857 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:45:29.255979 1429857 retry.go:31] will retry after 677.449885ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	I1209 05:45:29.551561 1429857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1209 05:45:29.613116 1429857 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:45:29.613148 1429857 retry.go:31] will retry after 949.934015ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	W1209 05:45:29.683816 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
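	[editor's note] Interleaved with the addon retries, node_ready.go is polling the same down apiserver for the node's Ready condition. A sketch of that check with client-go, assuming an already-built clientset; the function name is ours, not minikube's.

	package readiness

	import (
		"context"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// nodeReady fetches the node and reports whether its Ready condition is True.
	// While 192.168.85.2:8443 refuses connections, the Get itself errors, which
	// is exactly the "connect: connection refused" warning repeated in this log.
	func nodeReady(ctx context.Context, c kubernetes.Interface, name string) (bool, error) {
		node, err := c.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, cond := range node.Status.Conditions {
			if cond.Type == corev1.NodeReady {
				return cond.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}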
	I1209 05:45:29.917380 1429857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 05:45:29.933951 1429857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1209 05:45:30.056298 1429857 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:45:30.056338 1429857 retry.go:31] will retry after 692.239155ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	W1209 05:45:30.056406 1429857 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:45:30.056419 1429857 retry.go:31] will retry after 1.787501236s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	I1209 05:45:30.563380 1429857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1209 05:45:30.625844 1429857 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:45:30.625911 1429857 retry.go:31] will retry after 1.269031662s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1209 05:45:30.749550 1429857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1209 05:45:30.807105 1429857 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:45:30.807136 1429857 retry.go:31] will retry after 2.270752641s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
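	[editor's note] Each apply in this loop shells out to the version-pinned kubectl exactly as logged. kubectl's client-side validation first downloads the OpenAPI schema from the apiserver, so with nothing listening on localhost:8443 every attempt fails identically before any manifest is sent (--validate=false, as the error text suggests, would only skip the schema download; the apply itself would still need a live apiserver). A minimal sketch of one such invocation, mirroring the logged command:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Run the pinned kubectl under sudo with the cluster's kubeconfig
		// and force-apply one addon manifest, as the ssh_runner lines do.
		cmd := exec.Command("sudo",
			"KUBECONFIG=/var/lib/minikube/kubeconfig",
			"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
			"apply", "--force", "-f",
			"/etc/kubernetes/addons/storage-provisioner.yaml")
		out, err := cmd.CombinedOutput()
		if err != nil {
			fmt.Printf("apply failed, will retry: %v\n%s", err, out)
		}
	}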
	W1209 05:45:31.684175 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	I1209 05:45:31.844648 1429857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 05:45:31.895165 1429857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1209 05:45:31.914099 1429857 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:45:31.914132 1429857 retry.go:31] will retry after 1.693137355s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	W1209 05:45:31.975352 1429857 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:45:31.975398 1429857 retry.go:31] will retry after 3.456836552s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1209 05:45:33.078099 1429857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1209 05:45:33.136837 1429857 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:45:33.136870 1429857 retry.go:31] will retry after 2.044494816s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I1209 05:45:33.607514 1429857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1209 05:45:33.665951 1429857 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:45:33.665984 1429857 retry.go:31] will retry after 3.185980177s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	W1209 05:45:33.684482 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	I1209 05:45:35.181952 1429857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1209 05:45:35.257647 1429857 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:45:35.257683 1429857 retry.go:31] will retry after 6.247119086s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I1209 05:45:35.432935 1429857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1209 05:45:35.489710 1429857 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:45:35.489747 1429857 retry.go:31] will retry after 5.005761894s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	W1209 05:45:36.183633 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	I1209 05:45:36.853121 1429857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1209 05:45:36.910093 1429857 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:45:36.910124 1429857 retry.go:31] will retry after 2.260143685s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	W1209 05:45:38.684059 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	I1209 05:45:39.170535 1429857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1209 05:45:39.232801 1429857 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:45:39.232833 1429857 retry.go:31] will retry after 5.898281664s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	I1209 05:45:40.496123 1429857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1209 05:45:40.569501 1429857 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:45:40.569541 1429857 retry.go:31] will retry after 5.242247905s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	W1209 05:45:41.183647 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	I1209 05:45:41.505063 1429857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1209 05:45:41.569422 1429857 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:45:41.569455 1429857 retry.go:31] will retry after 4.503235869s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	W1209 05:45:43.683557 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	I1209 05:45:45.132421 1429857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1209 05:45:45.222154 1429857 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:45:45.222198 1429857 retry.go:31] will retry after 8.250619683s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	W1209 05:45:45.684059 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	I1209 05:45:45.812550 1429857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1209 05:45:45.881229 1429857 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:45:45.881261 1429857 retry.go:31] will retry after 8.251153137s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1209 05:45:46.073504 1429857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1209 05:45:46.133618 1429857 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:45:46.133649 1429857 retry.go:31] will retry after 8.692623616s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	W1209 05:45:47.684156 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:45:49.684420 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:45:52.184266 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	I1209 05:45:53.473746 1429857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1209 05:45:53.531667 1429857 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:45:53.531698 1429857 retry.go:31] will retry after 15.506930845s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	I1209 05:45:54.132979 1429857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1209 05:45:54.193191 1429857 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:45:54.193224 1429857 retry.go:31] will retry after 10.284746977s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	W1209 05:45:54.683975 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	I1209 05:45:54.827471 1429857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1209 05:45:54.889500 1429857 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:45:54.889532 1429857 retry.go:31] will retry after 18.446693624s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	W1209 05:45:56.684069 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:45:58.684566 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:46:01.183568 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:46:03.184493 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	I1209 05:46:04.478972 1429857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1209 05:46:04.538725 1429857 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:46:04.538770 1429857 retry.go:31] will retry after 23.738719196s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	W1209 05:46:05.684477 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:46:08.183831 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	I1209 05:46:09.038866 1429857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1209 05:46:09.093686 1429857 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:46:09.093718 1429857 retry.go:31] will retry after 26.248517502s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	W1209 05:46:10.184557 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:46:12.684037 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	I1209 05:46:13.337395 1429857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1209 05:46:13.395453 1429857 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:46:13.395482 1429857 retry.go:31] will retry after 20.604537862s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	W1209 05:46:14.684146 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:46:17.183632 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:46:19.184010 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:46:21.184436 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:46:23.684454 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:46:26.184391 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	I1209 05:46:28.277860 1429857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1209 05:46:28.338442 1429857 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:46:28.338477 1429857 retry.go:31] will retry after 19.859111094s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	W1209 05:46:28.684457 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:46:31.184269 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:46:33.683555 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	I1209 05:46:34.001016 1429857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1209 05:46:34.063987 1429857 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:46:34.064033 1429857 retry.go:31] will retry after 28.707309643s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I1209 05:46:35.342890 1429857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1209 05:46:35.399451 1429857 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:46:35.399484 1429857 retry.go:31] will retry after 27.272034746s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	W1209 05:46:35.684281 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:46:38.184278 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:46:40.684368 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:46:42.684576 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:46:45.183976 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:46:47.683573 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	I1209 05:46:48.197871 1429857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1209 05:46:48.261674 1429857 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 05:46:48.261785 1429857 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1209 05:46:49.683828 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:46:51.684493 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:46:54.184206 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:46:56.683584 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:46:58.684546 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:47:01.184173 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	I1209 05:47:02.671757 1429857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1209 05:47:02.753978 1429857 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 05:47:02.754067 1429857 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1209 05:47:02.772232 1429857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1209 05:47:02.829567 1429857 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 05:47:02.829669 1429857 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1209 05:47:02.832581 1429857 out.go:179] * Enabled addons: 
	I1209 05:47:02.835625 1429857 addons.go:530] duration metric: took 1m36.002308157s for enable addons: enabled=[]
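Every addon apply above fails the same way: kubectl cannot download the OpenAPI schema because nothing is listening on localhost:8443. The suggested --validate=false would only skip that schema-validation step; it would not make the apply succeed against a dead apiserver. A minimal sketch for confirming the apiserver is reachable before retrying (illustrative commands, not taken from this test run):

    # probe the raw endpoint the validator was trying to reach
    curl -k 'https://localhost:8443/openapi/v2?timeout=32s'
    # or ask the apiserver for its readiness directly, using the same kubeconfig minikube uses
    kubectl --kubeconfig=/var/lib/minikube/kubeconfig get --raw='/readyz?verbose'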
	W1209 05:47:03.184442 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	[... the same node_ready.go:55 "connection refused" retry logged roughly every 2.5s from 05:47:05 through 05:48:55 ...]
	W1209 05:48:58.184091 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	I1209 05:49:02.727137 1422398 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001512454s
	I1209 05:49:02.727164 1422398 kubeadm.go:319] 
	I1209 05:49:02.727221 1422398 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1209 05:49:02.727255 1422398 kubeadm.go:319] 	- The kubelet is not running
	I1209 05:49:02.727360 1422398 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1209 05:49:02.727366 1422398 kubeadm.go:319] 
	I1209 05:49:02.727470 1422398 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1209 05:49:02.727502 1422398 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1209 05:49:02.727533 1422398 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1209 05:49:02.727537 1422398 kubeadm.go:319] 
	I1209 05:49:02.737230 1422398 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1209 05:49:02.737653 1422398 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1209 05:49:02.737766 1422398 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1209 05:49:02.738004 1422398 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1209 05:49:02.738014 1422398 kubeadm.go:319] 
	I1209 05:49:02.738083 1422398 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
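The kubeadm failure above is the root cause for this process: the kubelet never answered its health probe within the 4m0s window. The commands the message recommends, plus the exact endpoint kubeadm polls, are the quickest way to see why; a sketch to run inside the node (e.g. via `minikube ssh -p <profile>`; the profile name is omitted here):

    systemctl status kubelet
    journalctl -xeu kubelet --no-pager | tail -n 100
    # the probe kubeadm polls for up to 4m0s:
    curl -sSL http://127.0.0.1:10248/healthz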
	I1209 05:49:02.738133 1422398 kubeadm.go:403] duration metric: took 8m7.90723854s to StartCluster
	I1209 05:49:02.738172 1422398 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:49:02.738235 1422398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:49:02.766379 1422398 cri.go:89] found id: ""
	I1209 05:49:02.766408 1422398 logs.go:282] 0 containers: []
	W1209 05:49:02.766416 1422398 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:49:02.766423 1422398 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:49:02.766487 1422398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:49:02.790398 1422398 cri.go:89] found id: ""
	I1209 05:49:02.790423 1422398 logs.go:282] 0 containers: []
	W1209 05:49:02.790431 1422398 logs.go:284] No container was found matching "etcd"
	I1209 05:49:02.790437 1422398 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:49:02.790493 1422398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:49:02.814861 1422398 cri.go:89] found id: ""
	I1209 05:49:02.814885 1422398 logs.go:282] 0 containers: []
	W1209 05:49:02.814894 1422398 logs.go:284] No container was found matching "coredns"
	I1209 05:49:02.814900 1422398 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:49:02.814958 1422398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:49:02.838939 1422398 cri.go:89] found id: ""
	I1209 05:49:02.838964 1422398 logs.go:282] 0 containers: []
	W1209 05:49:02.838973 1422398 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:49:02.838979 1422398 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:49:02.839047 1422398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:49:02.863333 1422398 cri.go:89] found id: ""
	I1209 05:49:02.863398 1422398 logs.go:282] 0 containers: []
	W1209 05:49:02.863421 1422398 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:49:02.863440 1422398 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:49:02.863527 1422398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:49:02.887104 1422398 cri.go:89] found id: ""
	I1209 05:49:02.887134 1422398 logs.go:282] 0 containers: []
	W1209 05:49:02.887152 1422398 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:49:02.887159 1422398 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:49:02.887226 1422398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:49:02.911004 1422398 cri.go:89] found id: ""
	I1209 05:49:02.911031 1422398 logs.go:282] 0 containers: []
	W1209 05:49:02.911039 1422398 logs.go:284] No container was found matching "kindnet"
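With the kubelet down, no control-plane containers were ever created, which is why every per-component crictl query above returns an empty list. A single check roughly equivalent to that loop (a sketch; crictl's --name filter accepts a regular expression):

    sudo crictl ps -a --name 'kube-apiserver|etcd|coredns|kube-scheduler|kube-proxy|kube-controller-manager|kindnet'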
	I1209 05:49:02.911049 1422398 logs.go:123] Gathering logs for kubelet ...
	I1209 05:49:02.911061 1422398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:49:02.967158 1422398 logs.go:123] Gathering logs for dmesg ...
	I1209 05:49:02.967192 1422398 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:49:02.983481 1422398 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:49:02.983507 1422398 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:49:03.049617 1422398 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:49:03.041414    4809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:49:03.042038    4809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:49:03.043805    4809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:49:03.044324    4809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:49:03.045788    4809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	[... the same five memcache.go:265 "connection refused" errors and "The connection to the server localhost:8443 was refused" line as above ...]
	
	** /stderr **
	I1209 05:49:03.049650 1422398 logs.go:123] Gathering logs for containerd ...
	I1209 05:49:03.049664 1422398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:49:03.089335 1422398 logs.go:123] Gathering logs for container status ...
	I1209 05:49:03.089371 1422398 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1209 05:49:03.115730 1422398 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001512454s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1209 05:49:03.115806 1422398 out.go:285] * 
	W1209 05:49:03.116006 1422398 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout/stderr: [... identical to the kubeadm init output in the preceding "Error starting cluster" message ...]
	
	W1209 05:49:03.116054 1422398 out.go:285] * 
	W1209 05:49:03.118239 1422398 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 05:49:03.123846 1422398 out.go:203] 
	W1209 05:49:03.126762 1422398 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout/stderr: [... identical to the kubeadm init output in the first "Error starting cluster" message above ...]
	
	W1209 05:49:03.126805 1422398 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1209 05:49:03.126825 1422398 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1209 05:49:03.129920 1422398 out.go:203] 
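The suggestion above is actionable as-is; a sketch of the retry with the recommended flag, plus the kubelet configuration change the cgroups v1 warning calls for (the YAML field casing is assumed from the warning text, which names the option 'FailCgroupV1'; profile name omitted):

    minikube start -p <profile> --driver=docker --container-runtime=containerd \
      --kubernetes-version=v1.35.0-beta.0 \
      --extra-config=kubelet.cgroup-driver=systemd
    # on a cgroups v1 host with kubelet v1.35+, the KubeletConfiguration would also need:
    #   failCgroupV1: false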
	W1209 05:49:00.184450 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	[... the same node_ready.go:55 "connection refused" retry logged roughly every 2.5s from 05:49:02 through 05:50:11 ...]
	W1209 05:50:13.684235 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:50:15.684441 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:50:18.184274 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:50:20.184328 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:50:22.684389 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:50:25.183523 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:50:27.184424 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:50:29.684391 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:50:32.183586 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:50:34.184266 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:50:36.184451 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:50:38.684140 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:50:41.184274 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:50:43.184338 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
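The run above is one poll loop flattened into timestamps: minikube re-queries the node's Ready condition every 2–2.5 s and logs each refused connection until the apiserver answers or the wait times out. A minimal sketch of that kind of loop, assuming client-go; the node name is taken from the log, while the kubeconfig path, interval, and timeout are illustrative:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Illustrative kubeconfig path, not necessarily what the test harness uses.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)

    	// Poll every 2.5s for up to 6m; each failed Get is logged and retried,
    	// which is what produces the repeated "will retry" lines above.
    	err = wait.PollUntilContextTimeout(context.Background(), 2500*time.Millisecond, 6*time.Minute, true,
    		func(ctx context.Context) (bool, error) {
    			node, err := cs.CoreV1().Nodes().Get(ctx, "no-preload-842269", metav1.GetOptions{})
    			if err != nil {
    				fmt.Printf("error getting node condition \"Ready\" (will retry): %v\n", err)
    				return false, nil // swallow the transient error so the poll keeps going
    			}
    			for _, c := range node.Status.Conditions {
    				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
    					return true, nil
    				}
    			}
    			return false, nil
    		})
    	fmt.Println("wait result:", err)
    }

Returning false, nil from the condition is what keeps the loop alive through a refused dial; the wait only fails once the outer timeout expires, which is why the same line repeats for a minute and a half instead of the test aborting immediately.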
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 09 05:40:53 newest-cni-262540 containerd[755]: time="2025-12-09T05:40:53.083724360Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 09 05:40:53 newest-cni-262540 containerd[755]: time="2025-12-09T05:40:53.083744421Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 09 05:40:53 newest-cni-262540 containerd[755]: time="2025-12-09T05:40:53.083783443Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 09 05:40:53 newest-cni-262540 containerd[755]: time="2025-12-09T05:40:53.083797490Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 09 05:40:53 newest-cni-262540 containerd[755]: time="2025-12-09T05:40:53.083807000Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 09 05:40:53 newest-cni-262540 containerd[755]: time="2025-12-09T05:40:53.083818889Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 09 05:40:53 newest-cni-262540 containerd[755]: time="2025-12-09T05:40:53.083828136Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 09 05:40:53 newest-cni-262540 containerd[755]: time="2025-12-09T05:40:53.083838802Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 09 05:40:53 newest-cni-262540 containerd[755]: time="2025-12-09T05:40:53.083861349Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 09 05:40:53 newest-cni-262540 containerd[755]: time="2025-12-09T05:40:53.083894169Z" level=info msg="Connect containerd service"
	Dec 09 05:40:53 newest-cni-262540 containerd[755]: time="2025-12-09T05:40:53.084216441Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 09 05:40:53 newest-cni-262540 containerd[755]: time="2025-12-09T05:40:53.084737846Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 09 05:40:53 newest-cni-262540 containerd[755]: time="2025-12-09T05:40:53.102982113Z" level=info msg="Start subscribing containerd event"
	Dec 09 05:40:53 newest-cni-262540 containerd[755]: time="2025-12-09T05:40:53.103063719Z" level=info msg="Start recovering state"
	Dec 09 05:40:53 newest-cni-262540 containerd[755]: time="2025-12-09T05:40:53.104591037Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 09 05:40:53 newest-cni-262540 containerd[755]: time="2025-12-09T05:40:53.104654519Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 09 05:40:53 newest-cni-262540 containerd[755]: time="2025-12-09T05:40:53.142169815Z" level=info msg="Start event monitor"
	Dec 09 05:40:53 newest-cni-262540 containerd[755]: time="2025-12-09T05:40:53.142215894Z" level=info msg="Start cni network conf syncer for default"
	Dec 09 05:40:53 newest-cni-262540 containerd[755]: time="2025-12-09T05:40:53.142225797Z" level=info msg="Start streaming server"
	Dec 09 05:40:53 newest-cni-262540 containerd[755]: time="2025-12-09T05:40:53.142235594Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 09 05:40:53 newest-cni-262540 containerd[755]: time="2025-12-09T05:40:53.142243799Z" level=info msg="runtime interface starting up..."
	Dec 09 05:40:53 newest-cni-262540 containerd[755]: time="2025-12-09T05:40:53.142250239Z" level=info msg="starting plugins..."
	Dec 09 05:40:53 newest-cni-262540 containerd[755]: time="2025-12-09T05:40:53.142262826Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 09 05:40:53 newest-cni-262540 containerd[755]: time="2025-12-09T05:40:53.142386818Z" level=info msg="containerd successfully booted in 0.079198s"
	Dec 09 05:40:53 newest-cni-262540 systemd[1]: Started containerd.service - containerd container runtime.
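The only error in this containerd startup is the CNI one: the cri plugin found no network config in /etc/cni/net.d, which is expected on a node where no CNI has been installed yet, and the "cni network conf syncer" started just below retries once a file appears. For illustration only, a sketch that drops a minimal bridge conflist of the shape that loader accepts; the file name, bridge name, and CIDR here are assumptions, not what minikube deploys:

    package main

    import "os"

    // A minimal CNI conflist of the shape containerd's cri plugin loads from
    // /etc/cni/net.d. Network name, bridge name, and subnet are illustrative only.
    const conflist = `{
      "cniVersion": "1.0.0",
      "name": "minimal-bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.42.0.0/16"
          }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }`

    func main() {
    	// The cri plugin picks the first conflist in lexical order from the conf dir,
    	// so installing a CNI amounts to writing a file like this one.
    	if err := os.WriteFile("/etc/cni/net.d/10-minimal.conflist", []byte(conflist), 0o644); err != nil {
    		panic(err)
    	}
    }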
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:50:46.257533    5919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:50:46.258372    5919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:50:46.259847    5919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:50:46.260266    5919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:50:46.261697    5919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +25.904254] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:14] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:16] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:18] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:19] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:30] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:32] overlayfs: idmapped layers are currently not supported
	[ +28.114653] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:33] overlayfs: idmapped layers are currently not supported
	[ +23.720849] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:34] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:35] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:36] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:37] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:38] overlayfs: idmapped layers are currently not supported
	[ +23.656275] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:39] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:57] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:58] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:00] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:02] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:03] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:05] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:06] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 9 05:31] overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	
	
	==> kernel <==
	 05:50:46 up  8:32,  0 user,  load average: 0.60, 0.68, 1.32
	Linux newest-cni-262540 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 09 05:50:43 newest-cni-262540 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 09 05:50:43 newest-cni-262540 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 455.
	Dec 09 05:50:43 newest-cni-262540 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 05:50:43 newest-cni-262540 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 05:50:43 newest-cni-262540 kubelet[5799]: E1209 05:50:43.976929    5799 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 09 05:50:43 newest-cni-262540 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 09 05:50:43 newest-cni-262540 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 09 05:50:44 newest-cni-262540 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 456.
	Dec 09 05:50:44 newest-cni-262540 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 05:50:44 newest-cni-262540 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 05:50:44 newest-cni-262540 kubelet[5804]: E1209 05:50:44.727302    5804 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 09 05:50:44 newest-cni-262540 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 09 05:50:44 newest-cni-262540 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 09 05:50:45 newest-cni-262540 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 457.
	Dec 09 05:50:45 newest-cni-262540 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 05:50:45 newest-cni-262540 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 05:50:45 newest-cni-262540 kubelet[5817]: E1209 05:50:45.488903    5817 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 09 05:50:45 newest-cni-262540 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 09 05:50:45 newest-cni-262540 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 09 05:50:46 newest-cni-262540 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 458.
	Dec 09 05:50:46 newest-cni-262540 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 05:50:46 newest-cni-262540 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 05:50:46 newest-cni-262540 kubelet[5911]: E1209 05:50:46.246899    5911 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 09 05:50:46 newest-cni-262540 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 09 05:50:46 newest-cni-262540 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
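The kubelet restart loop at the end of that dump is the actual failure behind this test group: the v1.35.0-beta.0 kubelet validates the host cgroup hierarchy at startup and, with its default configuration, exits when it finds cgroup v1, so systemd keeps cycling it (restart counter 455 through 458 above) and the apiserver never comes back, which in turn produces the empty container list and the refused connections. Classifying a host is cheap, since the cgroup.controllers file exists only at the root of a cgroup v2 (unified) hierarchy; a sketch:

    package main

    import (
    	"fmt"
    	"os"
    )

    func main() {
    	// On a unified (v2) hierarchy, /sys/fs/cgroup/cgroup.controllers exists;
    	// on a legacy (v1) hierarchy it does not.
    	if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
    		fmt.Println("cgroup v2 (unified): this kubelet would start")
    	} else {
    		fmt.Println("cgroup v1 (legacy): the v1.35.0-beta.0 kubelet refuses to run here")
    	}
    }

The shell equivalent is stat -fc %T /sys/fs/cgroup/, which prints cgroup2fs on a unified host and tmpfs on a legacy one such as this Ubuntu 20.04 worker.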
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-262540 -n newest-cni-262540
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-262540 -n newest-cni-262540: exit status 6 (358.928548ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1209 05:50:46.812955 1436824 status.go:458] kubeconfig endpoint: get endpoint: "newest-cni-262540" does not appear in /home/jenkins/minikube-integration/22081-1142328/kubeconfig

** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "newest-cni-262540" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (102.12s)

TestStartStop/group/newest-cni/serial/SecondStart (374.48s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-262540 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
E1209 05:51:06.732385 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/addons-221952/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p newest-cni-262540 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: exit status 105 (6m9.411279235s)

-- stdout --
	* [newest-cni-262540] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22081
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22081-1142328/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-1142328/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "newest-cni-262540" primary control-plane node in "newest-cni-262540" cluster
	* Pulling base image v0.0.48-1765184860-22066 ...
	* Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	  - kubeadm.pod-network-cidr=10.42.0.0/16
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image registry.k8s.io/echoserver:1.4
	* Enabled addons: 
	
	

-- /stdout --
** stderr ** 
	I1209 05:50:48.368732 1437114 out.go:360] Setting OutFile to fd 1 ...
	I1209 05:50:48.368913 1437114 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 05:50:48.368940 1437114 out.go:374] Setting ErrFile to fd 2...
	I1209 05:50:48.368958 1437114 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 05:50:48.369216 1437114 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-1142328/.minikube/bin
	I1209 05:50:48.369601 1437114 out.go:368] Setting JSON to false
	I1209 05:50:48.370536 1437114 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":30772,"bootTime":1765228677,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1209 05:50:48.370622 1437114 start.go:143] virtualization:  
	I1209 05:50:48.373806 1437114 out.go:179] * [newest-cni-262540] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1209 05:50:48.377517 1437114 out.go:179]   - MINIKUBE_LOCATION=22081
	I1209 05:50:48.377579 1437114 notify.go:221] Checking for updates...
	I1209 05:50:48.383314 1437114 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 05:50:48.386284 1437114 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22081-1142328/kubeconfig
	I1209 05:50:48.389132 1437114 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-1142328/.minikube
	I1209 05:50:48.392076 1437114 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1209 05:50:48.394975 1437114 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 05:50:48.398361 1437114 config.go:182] Loaded profile config "newest-cni-262540": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1209 05:50:48.398977 1437114 driver.go:422] Setting default libvirt URI to qemu:///system
	I1209 05:50:48.429565 1437114 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1209 05:50:48.429674 1437114 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 05:50:48.493190 1437114 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-09 05:50:48.483865172 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1209 05:50:48.493298 1437114 docker.go:319] overlay module found
	I1209 05:50:48.496461 1437114 out.go:179] * Using the docker driver based on existing profile
	I1209 05:50:48.499256 1437114 start.go:309] selected driver: docker
	I1209 05:50:48.499276 1437114 start.go:927] validating driver "docker" against &{Name:newest-cni-262540 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-262540 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 05:50:48.499393 1437114 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 05:50:48.500188 1437114 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 05:50:48.552839 1437114 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-09 05:50:48.544121972 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1209 05:50:48.553181 1437114 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1209 05:50:48.553214 1437114 cni.go:84] Creating CNI manager for ""
	I1209 05:50:48.553271 1437114 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1209 05:50:48.553312 1437114 start.go:353] cluster config:
	{Name:newest-cni-262540 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-262540 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 05:50:48.558270 1437114 out.go:179] * Starting "newest-cni-262540" primary control-plane node in "newest-cni-262540" cluster
	I1209 05:50:48.560987 1437114 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1209 05:50:48.563913 1437114 out.go:179] * Pulling base image v0.0.48-1765184860-22066 ...
	I1209 05:50:48.566628 1437114 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1209 05:50:48.566677 1437114 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1209 05:50:48.566701 1437114 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local docker daemon
	I1209 05:50:48.566709 1437114 cache.go:65] Caching tarball of preloaded images
	I1209 05:50:48.566793 1437114 preload.go:238] Found /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1209 05:50:48.566803 1437114 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1209 05:50:48.566914 1437114 profile.go:143] Saving config to /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/config.json ...
	I1209 05:50:48.585366 1437114 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local docker daemon, skipping pull
	I1209 05:50:48.585390 1437114 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c exists in daemon, skipping load
	I1209 05:50:48.585410 1437114 cache.go:243] Successfully downloaded all kic artifacts
	I1209 05:50:48.585447 1437114 start.go:360] acquireMachinesLock for newest-cni-262540: {Name:mk272d84ff1bc8c8949f2f0b1f608a7519899d10 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 05:50:48.585504 1437114 start.go:364] duration metric: took 35.806µs to acquireMachinesLock for "newest-cni-262540"
	I1209 05:50:48.585529 1437114 start.go:96] Skipping create...Using existing machine configuration
	I1209 05:50:48.585539 1437114 fix.go:54] fixHost starting: 
	I1209 05:50:48.585799 1437114 cli_runner.go:164] Run: docker container inspect newest-cni-262540 --format={{.State.Status}}
	I1209 05:50:48.601614 1437114 fix.go:112] recreateIfNeeded on newest-cni-262540: state=Stopped err=<nil>
	W1209 05:50:48.601645 1437114 fix.go:138] unexpected machine state, will restart: <nil>
	I1209 05:50:48.604910 1437114 out.go:252] * Restarting existing docker container for "newest-cni-262540" ...
	I1209 05:50:48.604997 1437114 cli_runner.go:164] Run: docker start newest-cni-262540
	I1209 05:50:48.871934 1437114 cli_runner.go:164] Run: docker container inspect newest-cni-262540 --format={{.State.Status}}
	I1209 05:50:48.896820 1437114 kic.go:430] container "newest-cni-262540" state is running.
	I1209 05:50:48.898586 1437114 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-262540
	I1209 05:50:48.919622 1437114 profile.go:143] Saving config to /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/config.json ...
	I1209 05:50:48.919952 1437114 machine.go:94] provisionDockerMachine start ...
	I1209 05:50:48.920090 1437114 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-262540
	I1209 05:50:48.944382 1437114 main.go:143] libmachine: Using SSH client type: native
	I1209 05:50:48.944721 1437114 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 34215 <nil> <nil>}
	I1209 05:50:48.944730 1437114 main.go:143] libmachine: About to run SSH command:
	hostname
	I1209 05:50:48.945423 1437114 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:54144->127.0.0.1:34215: read: connection reset by peer
	I1209 05:50:52.103931 1437114 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-262540
	
	I1209 05:50:52.103958 1437114 ubuntu.go:182] provisioning hostname "newest-cni-262540"
	I1209 05:50:52.104072 1437114 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-262540
	I1209 05:50:52.121462 1437114 main.go:143] libmachine: Using SSH client type: native
	I1209 05:50:52.121778 1437114 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 34215 <nil> <nil>}
	I1209 05:50:52.121795 1437114 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-262540 && echo "newest-cni-262540" | sudo tee /etc/hostname
	I1209 05:50:52.280621 1437114 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-262540
	
	I1209 05:50:52.280705 1437114 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-262540
	I1209 05:50:52.301681 1437114 main.go:143] libmachine: Using SSH client type: native
	I1209 05:50:52.301997 1437114 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 34215 <nil> <nil>}
	I1209 05:50:52.302019 1437114 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-262540' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-262540/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-262540' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 05:50:52.452274 1437114 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1209 05:50:52.452304 1437114 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22081-1142328/.minikube CaCertPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22081-1142328/.minikube}
	I1209 05:50:52.452324 1437114 ubuntu.go:190] setting up certificates
	I1209 05:50:52.452332 1437114 provision.go:84] configureAuth start
	I1209 05:50:52.452391 1437114 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-262540
	I1209 05:50:52.475825 1437114 provision.go:143] copyHostCerts
	I1209 05:50:52.475907 1437114 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.pem, removing ...
	I1209 05:50:52.475921 1437114 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.pem
	I1209 05:50:52.475999 1437114 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.pem (1078 bytes)
	I1209 05:50:52.476136 1437114 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-1142328/.minikube/cert.pem, removing ...
	I1209 05:50:52.476147 1437114 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-1142328/.minikube/cert.pem
	I1209 05:50:52.476175 1437114 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22081-1142328/.minikube/cert.pem (1123 bytes)
	I1209 05:50:52.476288 1437114 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-1142328/.minikube/key.pem, removing ...
	I1209 05:50:52.476322 1437114 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-1142328/.minikube/key.pem
	I1209 05:50:52.476364 1437114 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22081-1142328/.minikube/key.pem (1675 bytes)
	I1209 05:50:52.476440 1437114 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca-key.pem org=jenkins.newest-cni-262540 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-262540]
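provision.go here mints a fresh server certificate whose SANs cover the loopback address, the container IP, and the machine's hostnames, signed by the minikube CA named on the same line. A compact sketch of the same idea using only the standard library; the key type, serial, and validity are arbitrary, and it self-signs where minikube signs with its CA key:

    package main

    import (
    	"crypto/ecdsa"
    	"crypto/elliptic"
    	"crypto/rand"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-262540"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// The SANs from the log line above; IPs and DNS names go in separate fields.
    		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
    		DNSNames:    []string{"localhost", "minikube", "newest-cni-262540"},
    	}
    	// Self-signed for brevity (template doubles as parent); minikube uses its CA instead.
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }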
	I1209 05:50:52.561012 1437114 provision.go:177] copyRemoteCerts
	I1209 05:50:52.561084 1437114 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 05:50:52.561133 1437114 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-262540
	I1209 05:50:52.578674 1437114 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34215 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/newest-cni-262540/id_rsa Username:docker}
	I1209 05:50:52.685758 1437114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1209 05:50:52.702408 1437114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1209 05:50:52.719173 1437114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1671 bytes)
	I1209 05:50:52.736435 1437114 provision.go:87] duration metric: took 284.081054ms to configureAuth
	I1209 05:50:52.736462 1437114 ubuntu.go:206] setting minikube options for container-runtime
	I1209 05:50:52.736672 1437114 config.go:182] Loaded profile config "newest-cni-262540": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1209 05:50:52.736698 1437114 machine.go:97] duration metric: took 3.816733312s to provisionDockerMachine
	I1209 05:50:52.736707 1437114 start.go:293] postStartSetup for "newest-cni-262540" (driver="docker")
	I1209 05:50:52.736719 1437114 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 05:50:52.736771 1437114 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 05:50:52.736819 1437114 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-262540
	I1209 05:50:52.753733 1437114 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34215 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/newest-cni-262540/id_rsa Username:docker}
	I1209 05:50:52.859644 1437114 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 05:50:52.862806 1437114 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1209 05:50:52.862830 1437114 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1209 05:50:52.862841 1437114 filesync.go:126] Scanning /home/jenkins/minikube-integration/22081-1142328/.minikube/addons for local assets ...
	I1209 05:50:52.862893 1437114 filesync.go:126] Scanning /home/jenkins/minikube-integration/22081-1142328/.minikube/files for local assets ...
	I1209 05:50:52.862974 1437114 filesync.go:149] local asset: /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/ssl/certs/11442312.pem -> 11442312.pem in /etc/ssl/certs
	I1209 05:50:52.863076 1437114 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 05:50:52.870063 1437114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/ssl/certs/11442312.pem --> /etc/ssl/certs/11442312.pem (1708 bytes)
	I1209 05:50:52.886852 1437114 start.go:296] duration metric: took 150.129481ms for postStartSetup
	I1209 05:50:52.886932 1437114 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1209 05:50:52.887020 1437114 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-262540
	I1209 05:50:52.904086 1437114 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34215 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/newest-cni-262540/id_rsa Username:docker}
	I1209 05:50:53.006063 1437114 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1209 05:50:53.011716 1437114 fix.go:56] duration metric: took 4.426170276s for fixHost
	I1209 05:50:53.011745 1437114 start.go:83] releasing machines lock for "newest-cni-262540", held for 4.426228294s
	I1209 05:50:53.011812 1437114 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-262540
	I1209 05:50:53.028468 1437114 ssh_runner.go:195] Run: cat /version.json
	I1209 05:50:53.028532 1437114 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-262540
	I1209 05:50:53.028815 1437114 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 05:50:53.028886 1437114 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-262540
	I1209 05:50:53.050698 1437114 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34215 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/newest-cni-262540/id_rsa Username:docker}
	I1209 05:50:53.061651 1437114 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34215 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/newest-cni-262540/id_rsa Username:docker}
	I1209 05:50:53.151708 1437114 ssh_runner.go:195] Run: systemctl --version
	I1209 05:50:53.249572 1437114 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 05:50:53.254184 1437114 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 05:50:53.254256 1437114 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 05:50:53.261725 1437114 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1209 05:50:53.261749 1437114 start.go:496] detecting cgroup driver to use...
	I1209 05:50:53.261780 1437114 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1209 05:50:53.261828 1437114 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1209 05:50:53.278531 1437114 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1209 05:50:53.291190 1437114 docker.go:218] disabling cri-docker service (if available) ...
	I1209 05:50:53.291252 1437114 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 05:50:53.306525 1437114 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 05:50:53.319477 1437114 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 05:50:53.424347 1437114 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 05:50:53.539911 1437114 docker.go:234] disabling docker service ...
	I1209 05:50:53.540005 1437114 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 05:50:53.555506 1437114 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 05:50:53.568379 1437114 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 05:50:53.684143 1437114 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 05:50:53.819865 1437114 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 05:50:53.834400 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 05:50:53.848555 1437114 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1209 05:50:53.857346 1437114 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1209 05:50:53.866232 1437114 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1209 05:50:53.866362 1437114 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1209 05:50:53.875141 1437114 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1209 05:50:53.883775 1437114 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1209 05:50:53.892743 1437114 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1209 05:50:53.901606 1437114 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 05:50:53.909694 1437114 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1209 05:50:53.918469 1437114 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1209 05:50:53.927272 1437114 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1209 05:50:53.939275 1437114 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 05:50:53.948029 1437114 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 05:50:53.956257 1437114 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 05:50:54.075166 1437114 ssh_runner.go:195] Run: sudo systemctl restart containerd
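The run of sed invocations above rewrites /etc/containerd/config.toml in place to match the "cgroupfs" driver detected earlier, then reloads systemd and restarts containerd. The core edit, as a sketch in Go mirroring the log's SystemdCgroup sed (path and regex are taken from the log, nothing more):

    package main

    import (
    	"os"
    	"regexp"
    )

    func main() {
    	const path = "/etc/containerd/config.toml"
    	data, err := os.ReadFile(path)
    	if err != nil {
    		panic(err)
    	}
    	// Equivalent of: sed -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
    	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
    	out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
    	if err := os.WriteFile(path, out, 0o644); err != nil {
    		panic(err)
    	}
    	// A systemctl daemon-reload and restart of containerd follow, as in the log.
    }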
	I1209 05:50:54.195479 1437114 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1209 05:50:54.195546 1437114 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1209 05:50:54.199412 1437114 start.go:564] Will wait 60s for crictl version
	I1209 05:50:54.199478 1437114 ssh_runner.go:195] Run: which crictl
	I1209 05:50:54.203349 1437114 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1209 05:50:54.229036 1437114 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1209 05:50:54.229147 1437114 ssh_runner.go:195] Run: containerd --version
	I1209 05:50:54.257755 1437114 ssh_runner.go:195] Run: containerd --version
	I1209 05:50:54.281890 1437114 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1209 05:50:54.284780 1437114 cli_runner.go:164] Run: docker network inspect newest-cni-262540 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1209 05:50:54.300458 1437114 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1209 05:50:54.304227 1437114 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 05:50:54.316829 1437114 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1209 05:50:54.319602 1437114 kubeadm.go:884] updating cluster {Name:newest-cni-262540 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-262540 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 05:50:54.319761 1437114 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1209 05:50:54.319850 1437114 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 05:50:54.344882 1437114 containerd.go:627] all images are preloaded for containerd runtime.
	I1209 05:50:54.344907 1437114 containerd.go:534] Images already preloaded, skipping extraction
	I1209 05:50:54.344969 1437114 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 05:50:54.368351 1437114 containerd.go:627] all images are preloaded for containerd runtime.
	I1209 05:50:54.368375 1437114 cache_images.go:86] Images are preloaded, skipping loading
	I1209 05:50:54.368384 1437114 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1209 05:50:54.368487 1437114 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-262540 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-262540 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
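The rendered unit above relies on systemd drop-in semantics: the bare ExecStart= clears the ExecStart list inherited from the base kubelet.service, and the second ExecStart= installs minikube's command line in its place. To inspect the merged result on the node (a sketch; the unit name is taken from the log):

    # base unit plus every drop-in, in merge order
    systemctl cat kubelet
    # the ExecStart that survived the reset
    systemctl show kubelet -p ExecStart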
	I1209 05:50:54.368554 1437114 ssh_runner.go:195] Run: sudo crictl info
	I1209 05:50:54.396480 1437114 cni.go:84] Creating CNI manager for ""
	I1209 05:50:54.396505 1437114 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1209 05:50:54.396527 1437114 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1209 05:50:54.396551 1437114 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-262540 NodeName:newest-cni-262540 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1209 05:50:54.396668 1437114 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-262540"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
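The kubeadm config above is four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) joined by --- and written to /var/tmp/minikube/kubeadm.yaml.new below. A quick sanity check before running init, sketched under the assumption of a kubeadm new enough (v1.26+) to ship the config validate subcommand:

    # static validation of the multi-document file
    kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
    # or walk the full init code path without mutating the node
    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run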
	
	I1209 05:50:54.396755 1437114 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1209 05:50:54.404357 1437114 binaries.go:51] Found k8s binaries, skipping transfer
	I1209 05:50:54.404462 1437114 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1209 05:50:54.411829 1437114 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1209 05:50:54.423915 1437114 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1209 05:50:54.436484 1437114 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
	I1209 05:50:54.448905 1437114 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1209 05:50:54.452398 1437114 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 05:50:54.461840 1437114 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 05:50:54.574379 1437114 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 05:50:54.590263 1437114 certs.go:69] Setting up /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540 for IP: 192.168.76.2
	I1209 05:50:54.590332 1437114 certs.go:195] generating shared ca certs ...
	I1209 05:50:54.590364 1437114 certs.go:227] acquiring lock for ca certs: {Name:mk15788702f8c4e23b5aeab3f44961d296fab259 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 05:50:54.590561 1437114 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.key
	I1209 05:50:54.590652 1437114 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/proxy-client-ca.key
	I1209 05:50:54.590688 1437114 certs.go:257] generating profile certs ...
	I1209 05:50:54.590838 1437114 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/client.key
	I1209 05:50:54.590942 1437114 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/apiserver.key.0ed49b31
	I1209 05:50:54.591051 1437114 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/proxy-client.key
	I1209 05:50:54.591210 1437114 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/1144231.pem (1338 bytes)
	W1209 05:50:54.591287 1437114 certs.go:480] ignoring /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/1144231_empty.pem, impossibly tiny 0 bytes
	I1209 05:50:54.591314 1437114 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca-key.pem (1679 bytes)
	I1209 05:50:54.591380 1437114 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem (1078 bytes)
	I1209 05:50:54.591442 1437114 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/cert.pem (1123 bytes)
	I1209 05:50:54.591490 1437114 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/key.pem (1675 bytes)
	I1209 05:50:54.591576 1437114 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/ssl/certs/11442312.pem (1708 bytes)
	I1209 05:50:54.592436 1437114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 05:50:54.617399 1437114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1209 05:50:54.636943 1437114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 05:50:54.658494 1437114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1209 05:50:54.674958 1437114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1209 05:50:54.701134 1437114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1209 05:50:54.720347 1437114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 05:50:54.738904 1437114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1209 05:50:54.758253 1437114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/1144231.pem --> /usr/share/ca-certificates/1144231.pem (1338 bytes)
	I1209 05:50:54.775204 1437114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/ssl/certs/11442312.pem --> /usr/share/ca-certificates/11442312.pem (1708 bytes)
	I1209 05:50:54.791963 1437114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 05:50:54.809403 1437114 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 05:50:54.821958 1437114 ssh_runner.go:195] Run: openssl version
	I1209 05:50:54.828113 1437114 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1209 05:50:54.835305 1437114 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1209 05:50:54.842458 1437114 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 05:50:54.846155 1437114 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 04:09 /usr/share/ca-certificates/minikubeCA.pem
	I1209 05:50:54.846222 1437114 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 05:50:54.887330 1437114 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1209 05:50:54.894630 1437114 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1144231.pem
	I1209 05:50:54.901722 1437114 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1144231.pem /etc/ssl/certs/1144231.pem
	I1209 05:50:54.909025 1437114 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1144231.pem
	I1209 05:50:54.912514 1437114 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 04:18 /usr/share/ca-certificates/1144231.pem
	I1209 05:50:54.912621 1437114 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1144231.pem
	I1209 05:50:54.953649 1437114 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1209 05:50:54.960781 1437114 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/11442312.pem
	I1209 05:50:54.967822 1437114 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/11442312.pem /etc/ssl/certs/11442312.pem
	I1209 05:50:54.975177 1437114 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11442312.pem
	I1209 05:50:54.978699 1437114 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 04:18 /usr/share/ca-certificates/11442312.pem
	I1209 05:50:54.978782 1437114 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11442312.pem
	I1209 05:50:55.020640 1437114 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
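The openssl x509 -hash / ln -fs / test -L sequence above builds the standard OpenSSL trust-store layout: each CA under /etc/ssl/certs is located through a symlink named <subject-hash>.0 pointing at the PEM file (b5213941.0 above is minikubeCA's hash). The same steps for an arbitrary cert, with a placeholder path:

    CERT=/usr/share/ca-certificates/example.pem   # placeholder path
    HASH=$(openssl x509 -hash -noout -in "$CERT")
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"
    test -L "/etc/ssl/certs/${HASH}.0" && echo "trust link in place"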
	I1209 05:50:55.034989 1437114 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 05:50:55.043885 1437114 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1209 05:50:55.090059 1437114 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1209 05:50:55.134954 1437114 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1209 05:50:55.180095 1437114 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1209 05:50:55.223090 1437114 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1209 05:50:55.265103 1437114 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
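Each -checkend 86400 run above asks whether the certificate expires within the next 86400 seconds (24 hours): exit status 0 means it stays valid past that window, exit status 1 means it is expiring and would need regeneration. A sketch of consuming that exit code:

    # exit 0: valid for at least another 24h; exit 1: expiring/expired
    if ! sudo openssl x509 -noout -checkend 86400 \
         -in /var/lib/minikube/certs/etcd/server.crt; then
        echo "cert expires within 24h; regeneration needed" >&2
    fi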
	I1209 05:50:55.306238 1437114 kubeadm.go:401] StartCluster: {Name:newest-cni-262540 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-262540 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 05:50:55.306348 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1209 05:50:55.306413 1437114 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 05:50:55.335032 1437114 cri.go:89] found id: ""
	I1209 05:50:55.335115 1437114 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1209 05:50:55.355619 1437114 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1209 05:50:55.355640 1437114 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1209 05:50:55.355691 1437114 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1209 05:50:55.363844 1437114 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1209 05:50:55.364433 1437114 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-262540" does not appear in /home/jenkins/minikube-integration/22081-1142328/kubeconfig
	I1209 05:50:55.364754 1437114 kubeconfig.go:62] /home/jenkins/minikube-integration/22081-1142328/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-262540" cluster setting kubeconfig missing "newest-cni-262540" context setting]
	I1209 05:50:55.365251 1437114 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1142328/kubeconfig: {Name:mk0dc127429f88dc4fdfb2d110deebc58207c1b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 05:50:55.366765 1437114 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1209 05:50:55.375221 1437114 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1209 05:50:55.375252 1437114 kubeadm.go:602] duration metric: took 19.605753ms to restartPrimaryControlPlane
	I1209 05:50:55.375261 1437114 kubeadm.go:403] duration metric: took 69.033781ms to StartCluster
	I1209 05:50:55.375276 1437114 settings.go:142] acquiring lock: {Name:mk8fa744e3d74bf8a1cbf5ac275c9f1969ad91a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 05:50:55.375345 1437114 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22081-1142328/kubeconfig
	I1209 05:50:55.376265 1437114 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1142328/kubeconfig: {Name:mk0dc127429f88dc4fdfb2d110deebc58207c1b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 05:50:55.376705 1437114 config.go:182] Loaded profile config "newest-cni-262540": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1209 05:50:55.376504 1437114 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1209 05:50:55.376810 1437114 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1209 05:50:55.377093 1437114 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-262540"
	I1209 05:50:55.377111 1437114 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-262540"
	I1209 05:50:55.377136 1437114 host.go:66] Checking if "newest-cni-262540" exists ...
	I1209 05:50:55.377594 1437114 cli_runner.go:164] Run: docker container inspect newest-cni-262540 --format={{.State.Status}}
	I1209 05:50:55.377785 1437114 addons.go:70] Setting dashboard=true in profile "newest-cni-262540"
	I1209 05:50:55.377813 1437114 addons.go:239] Setting addon dashboard=true in "newest-cni-262540"
	W1209 05:50:55.377825 1437114 addons.go:248] addon dashboard should already be in state true
	I1209 05:50:55.377849 1437114 host.go:66] Checking if "newest-cni-262540" exists ...
	I1209 05:50:55.378304 1437114 cli_runner.go:164] Run: docker container inspect newest-cni-262540 --format={{.State.Status}}
	I1209 05:50:55.378820 1437114 addons.go:70] Setting default-storageclass=true in profile "newest-cni-262540"
	I1209 05:50:55.378864 1437114 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-262540"
	I1209 05:50:55.379212 1437114 cli_runner.go:164] Run: docker container inspect newest-cni-262540 --format={{.State.Status}}
	I1209 05:50:55.381896 1437114 out.go:179] * Verifying Kubernetes components...
	I1209 05:50:55.388614 1437114 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 05:50:55.438264 1437114 addons.go:239] Setting addon default-storageclass=true in "newest-cni-262540"
	I1209 05:50:55.438303 1437114 host.go:66] Checking if "newest-cni-262540" exists ...
	I1209 05:50:55.438728 1437114 cli_runner.go:164] Run: docker container inspect newest-cni-262540 --format={{.State.Status}}
	I1209 05:50:55.440785 1437114 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 05:50:55.442715 1437114 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 05:50:55.442743 1437114 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1209 05:50:55.442806 1437114 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-262540
	I1209 05:50:55.442947 1437114 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1209 05:50:55.445621 1437114 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1209 05:50:55.449877 1437114 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1209 05:50:55.449904 1437114 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1209 05:50:55.449976 1437114 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-262540
	I1209 05:50:55.481759 1437114 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34215 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/newest-cni-262540/id_rsa Username:docker}
	I1209 05:50:55.496417 1437114 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1209 05:50:55.496440 1437114 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1209 05:50:55.496499 1437114 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-262540
	I1209 05:50:55.515362 1437114 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34215 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/newest-cni-262540/id_rsa Username:docker}
	I1209 05:50:55.537402 1437114 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34215 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/newest-cni-262540/id_rsa Username:docker}
	I1209 05:50:55.642792 1437114 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 05:50:55.677774 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 05:50:55.711653 1437114 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1209 05:50:55.711691 1437114 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1209 05:50:55.713691 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1209 05:50:55.771340 1437114 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1209 05:50:55.771368 1437114 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1209 05:50:55.785331 1437114 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1209 05:50:55.785403 1437114 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1209 05:50:55.798961 1437114 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1209 05:50:55.798984 1437114 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1209 05:50:55.811558 1437114 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1209 05:50:55.811625 1437114 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1209 05:50:55.824010 1437114 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1209 05:50:55.824113 1437114 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1209 05:50:55.836722 1437114 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1209 05:50:55.836745 1437114 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1209 05:50:55.849061 1437114 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1209 05:50:55.849126 1437114 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1209 05:50:55.862091 1437114 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1209 05:50:55.862114 1437114 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1209 05:50:55.875010 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1209 05:50:56.435552 1437114 api_server.go:52] waiting for apiserver process to appear ...
	W1209 05:50:56.435748 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:50:56.435801 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:50:56.435838 1437114 retry.go:31] will retry after 228.095144ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 05:50:56.435700 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:50:56.435898 1437114 retry.go:31] will retry after 361.053359ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 05:50:56.436142 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:50:56.436189 1437114 retry.go:31] will retry after 212.683869ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:50:56.649580 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1209 05:50:56.665010 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1209 05:50:56.729564 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:50:56.729662 1437114 retry.go:31] will retry after 263.201205ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 05:50:56.751560 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:50:56.751590 1437114 retry.go:31] will retry after 282.08987ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:50:56.797828 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1209 05:50:56.855489 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:50:56.855525 1437114 retry.go:31] will retry after 519.882573ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:50:56.936655 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:50:56.993111 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1209 05:50:57.034512 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1209 05:50:57.059780 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:50:57.059861 1437114 retry.go:31] will retry after 724.517068ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 05:50:57.095702 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:50:57.095733 1437114 retry.go:31] will retry after 773.591416ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:50:57.376312 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1209 05:50:57.435557 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:50:57.435589 1437114 retry.go:31] will retry after 453.196958ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:50:57.436773 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
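
The "Run: sudo pgrep -xnf kube-apiserver.*minikube.*" lines recur at roughly 500ms intervals: minikube is polling for a live kube-apiserver process while the addon applies keep failing. A minimal sketch of that poll loop (the 500ms interval is read off the log timestamps; the deadline is an illustrative assumption):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		// pgrep exits 0 when a matching process exists, non-zero otherwise.
		err := exec.Command("sudo", "pgrep", "-xnf",
			"kube-apiserver.*minikube.*").Run()
		if err == nil {
			fmt.Println("kube-apiserver process found")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("kube-apiserver never appeared")
}
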
	I1209 05:50:57.784620 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1209 05:50:57.844755 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:50:57.844791 1437114 retry.go:31] will retry after 1.262011023s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:50:57.869923 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 05:50:57.889536 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 05:50:57.936212 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1209 05:50:57.961431 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:50:57.961468 1437114 retry.go:31] will retry after 546.501311ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 05:50:58.032466 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:50:58.032501 1437114 retry.go:31] will retry after 1.229436669s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:50:58.436310 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
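
Each "Run:" entry above goes through ssh_runner.go:195, i.e. the command is executed inside the minikube node over SSH rather than on the test host. A minimal sketch of that idea with golang.org/x/crypto/ssh; the address, user, and password are placeholders, since minikube's real runner manages its own keys and sessions:

package main

import (
	"bytes"
	"fmt"
	"log"

	"golang.org/x/crypto/ssh"
)

func main() {
	cfg := &ssh.ClientConfig{
		User:            "docker",                            // placeholder
		Auth:            []ssh.AuthMethod{ssh.Password("x")}, // placeholder
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:32772", cfg) // placeholder port
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()

	var out bytes.Buffer
	sess.Stdout = &out
	// Same liveness check the log runs inside the node.
	if err := sess.Run("sudo pgrep -xnf kube-apiserver.*minikube.*"); err != nil {
		fmt.Println("kube-apiserver not running yet:", err)
		return
	}
	fmt.Print("kube-apiserver pid: ", out.String())
}
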
	I1209 05:50:58.508935 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1209 05:50:58.565163 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:50:58.565196 1437114 retry.go:31] will retry after 1.407912766s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:50:58.936676 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:50:59.107417 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1209 05:50:59.166291 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:50:59.166364 1437114 retry.go:31] will retry after 928.374807ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:50:59.262572 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1209 05:50:59.321942 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:50:59.321975 1437114 retry.go:31] will retry after 837.961471ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:50:59.436172 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:50:59.936839 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:50:59.973278 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 05:51:00.094961 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1209 05:51:00.122388 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:00.122508 1437114 retry.go:31] will retry after 2.37581771s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
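
Every failure in this section has the same root cause: with client-side validation enabled, kubectl first downloads the schema from /openapi/v2 on the apiserver, and since nothing is listening on [::1]:8443 each apply aborts with "connection refused" before any manifest reaches the cluster. The suggested --validate=false would only skip the schema download; the actual API requests would then fail the same way. A quick preflight check for this failure mode, assuming the localhost:8443 address from the log:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		// This is the state the log is stuck in.
		fmt.Println("apiserver not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port open; kubectl can fetch /openapi/v2 again")
}
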
	I1209 05:51:00.163516 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1209 05:51:00.369038 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:00.369122 1437114 retry.go:31] will retry after 1.02409357s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 05:51:00.430845 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:00.430881 1437114 retry.go:31] will retry after 1.008529781s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:00.435975 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:00.935928 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:01.393811 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1209 05:51:01.436520 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:01.440060 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1209 05:51:01.479948 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:01.480008 1437114 retry.go:31] will retry after 3.887040249s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 05:51:01.521362 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:01.521394 1437114 retry.go:31] will retry after 2.488257731s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:01.936891 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:02.436059 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:02.499505 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1209 05:51:02.558807 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:02.558839 1437114 retry.go:31] will retry after 1.68559081s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:02.936227 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:03.436252 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:03.936492 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:04.009914 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1209 05:51:04.068567 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:04.068604 1437114 retry.go:31] will retry after 3.558332748s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:04.244680 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1209 05:51:04.309239 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:04.309330 1437114 retry.go:31] will retry after 5.213787505s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
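
An alternative to retrying kubectl blindly is to wait for the apiserver to report readiness first. The sketch below polls /readyz over HTTPS; it assumes the apiserver at localhost:8443 serves that endpoint to anonymous clients, and it is not what minikube does here (the surrounding lines show it polling pgrep instead):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			// The apiserver serves a self-signed certificate; skip
			// verification for this readiness probe only.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://localhost:8443/readyz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver ready; safe to apply addon manifests")
				return
			}
		}
		time.Sleep(time.Second)
	}
	fmt.Println("gave up waiting for the apiserver")
}
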
	I1209 05:51:04.436559 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:04.936651 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:05.367810 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1209 05:51:05.433548 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:05.433586 1437114 retry.go:31] will retry after 5.477878375s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:05.436872 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:05.936073 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:06.436593 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:06.936543 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:07.436871 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:07.628150 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1209 05:51:07.690629 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:07.690661 1437114 retry.go:31] will retry after 6.157660473s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I1209 05:51:07.935908 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:08.436122 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:08.935959 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:09.436970 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:09.523671 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1209 05:51:09.581839 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:09.581914 1437114 retry.go:31] will retry after 9.601279523s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
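	Each addon manifest (storage-provisioner, storageclass, the dashboard bundle) runs its own independent retry loop; the retry.go:31 delays in this log grow with jitter from roughly 5s early on toward 47s near the end. A minimal sketch of that retry-with-growing-jittered-delay pattern, assuming a hypothetical apply callback and made-up growth constants, not minikube's actual retry.go:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retryApply re-runs apply with a jittered, growing delay between
	// attempts, mirroring the 5.47s ... 47.59s sequence in the log.
	func retryApply(apply func() error, attempts int) error {
		base := 5 * time.Second
		for i := 0; i < attempts; i++ {
			err := apply()
			if err == nil {
				return nil
			}
			sleep := base + time.Duration(rand.Int63n(int64(base)))
			fmt.Printf("apply failed, will retry after %v: %v\n", sleep, err)
			time.Sleep(sleep)
			base = base * 3 / 2 // assumed growth factor
		}
		return errors.New("apply did not succeed within the retry budget")
	}

	func main() {
		_ = retryApply(func() error {
			return errors.New("connection refused") // stand-in for the kubectl failure
		}, 3)
	}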
	I1209 05:51:09.936233 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:10.436178 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:10.911744 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1209 05:51:10.936618 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1209 05:51:11.040149 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:11.040187 1437114 retry.go:31] will retry after 9.211684326s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1209 05:51:11.436896 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:11.936862 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:12.435946 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:12.936781 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:13.436827 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:13.848647 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1209 05:51:13.909374 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:13.909406 1437114 retry.go:31] will retry after 5.044533036s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I1209 05:51:13.936521 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:14.436557 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:14.935977 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:15.436310 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:15.936335 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:16.436628 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:16.936535 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:17.436311 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:17.935962 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:18.435898 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:18.936142 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:18.955073 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1209 05:51:19.020072 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:19.020104 1437114 retry.go:31] will retry after 11.951102235s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I1209 05:51:19.184688 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1209 05:51:19.284505 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:19.284538 1437114 retry.go:31] will retry after 12.030085055s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	I1209 05:51:19.435928 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:19.936763 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:20.252740 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1209 05:51:20.316752 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:20.316784 1437114 retry.go:31] will retry after 7.019613017s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1209 05:51:20.436227 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:20.936875 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:21.435907 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:21.935963 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:22.436158 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:22.936474 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:23.436353 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:23.936003 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:24.435917 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:24.936039 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:25.435883 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:25.936680 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:26.436359 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:26.936582 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:27.336866 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1209 05:51:27.401213 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:27.401248 1437114 retry.go:31] will retry after 15.185111317s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1209 05:51:27.436540 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:27.936409 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:28.436146 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:28.936943 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:29.435893 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:29.936169 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:30.435922 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:30.936805 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:30.972257 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1209 05:51:31.030985 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:31.031019 1437114 retry.go:31] will retry after 20.454574576s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I1209 05:51:31.315422 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1209 05:51:31.375282 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:31.375315 1437114 retry.go:31] will retry after 20.731698158s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	I1209 05:51:31.436402 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:31.936683 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:32.436139 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:32.936168 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:33.436458 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:33.936647 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:34.435986 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:34.935949 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:35.436254 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:35.936501 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:36.436171 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:36.936413 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:37.436503 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:37.936112 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:38.436260 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:38.936155 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:39.435919 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:39.935963 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:40.435931 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:40.936251 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:41.435937 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:41.936193 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:42.436356 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
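	Interleaved with the applies, the bootstrapper polls for an apiserver process every ~500ms by running sudo pgrep -xnf kube-apiserver.*minikube.* inside the node (-f matches against the full command line, -x requires the whole line to match the pattern, -n returns only the newest matching PID). A minimal local sketch of that poll loop, with an assumed two-minute budget standing in for minikube's real wait logic:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		pattern := "kube-apiserver.*minikube.*" // copied from the log
		ticker := time.NewTicker(500 * time.Millisecond)
		defer ticker.Stop()
		deadline := time.After(2 * time.Minute) // assumed budget for this sketch
		for {
			select {
			case <-ticker.C:
				// -x whole-line match, -n newest PID, -f match the full command line.
				out, err := exec.Command("pgrep", "-xnf", pattern).Output()
				if err == nil {
					fmt.Printf("kube-apiserver is up, pid: %s", out)
					return
				}
				// pgrep exits 1 when nothing matches; keep polling.
			case <-deadline:
				fmt.Println("gave up waiting for kube-apiserver")
				return
			}
		}
	}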
	I1209 05:51:42.587277 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1209 05:51:42.649100 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:42.649137 1437114 retry.go:31] will retry after 20.728553891s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1209 05:51:42.936771 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:43.435958 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:43.936674 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:44.436708 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:44.936177 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:45.436620 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:45.936616 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:46.436000 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:46.936141 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:47.435976 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:47.936139 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:48.436162 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:48.936736 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:49.436154 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:49.936192 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:50.436517 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:50.936806 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:51.436499 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:51.485950 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1209 05:51:51.548585 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:51.548614 1437114 retry.go:31] will retry after 47.596790172s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I1209 05:51:51.936087 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:52.108051 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1209 05:51:52.167486 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:52.167519 1437114 retry.go:31] will retry after 29.777424896s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	I1209 05:51:52.436906 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:52.936203 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:53.436751 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:53.936576 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:54.436593 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:54.935988 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
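The repeating pgrep lines above are a ~500ms poll for a running kube-apiserver process whose full command line matches the profile. A sketch of that health poll, under the assumption that the flags mean what pgrep documents (-x exact match, -n newest, -f match against the full command line); helper names are hypothetical:

	// API-server process poll sketch (illustrative). pgrep exits non-zero
	// when nothing matches, which exec.Command reports as an error.
	package main

	import (
		"os/exec"
		"time"
	)

	func apiserverRunning() bool {
		return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
	}

	func waitForAPIServer(timeout time.Duration) bool {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if apiserverRunning() {
				return true
			}
			time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
		}
		return false
	}

	func main() { _ = waitForAPIServer(10 * time.Second) }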
	I1209 05:51:55.436246 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:51:55.436382 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:51:55.467996 1437114 cri.go:89] found id: ""
	I1209 05:51:55.468084 1437114 logs.go:282] 0 containers: []
	W1209 05:51:55.468107 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:51:55.468125 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:51:55.468223 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:51:55.504401 1437114 cri.go:89] found id: ""
	I1209 05:51:55.504427 1437114 logs.go:282] 0 containers: []
	W1209 05:51:55.504434 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:51:55.504440 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:51:55.504513 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:51:55.530581 1437114 cri.go:89] found id: ""
	I1209 05:51:55.530606 1437114 logs.go:282] 0 containers: []
	W1209 05:51:55.530615 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:51:55.530621 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:51:55.530689 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:51:55.555637 1437114 cri.go:89] found id: ""
	I1209 05:51:55.555708 1437114 logs.go:282] 0 containers: []
	W1209 05:51:55.555744 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:51:55.555768 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:51:55.555867 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:51:55.582108 1437114 cri.go:89] found id: ""
	I1209 05:51:55.582132 1437114 logs.go:282] 0 containers: []
	W1209 05:51:55.582141 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:51:55.582148 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:51:55.582242 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:51:55.606067 1437114 cri.go:89] found id: ""
	I1209 05:51:55.606092 1437114 logs.go:282] 0 containers: []
	W1209 05:51:55.606101 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:51:55.606119 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:51:55.606179 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:51:55.632387 1437114 cri.go:89] found id: ""
	I1209 05:51:55.632413 1437114 logs.go:282] 0 containers: []
	W1209 05:51:55.632422 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:51:55.632428 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:51:55.632489 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:51:55.657181 1437114 cri.go:89] found id: ""
	I1209 05:51:55.657207 1437114 logs.go:282] 0 containers: []
	W1209 05:51:55.657215 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
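Each probe cycle above runs `crictl ps -a --quiet --name=<component>` for every control-plane component; --quiet prints only container IDs, so the empty `found id: ""` results mean the containers were never created at all (not merely crashed). A sketch of that probe, mirroring cri.go's shape with hypothetical names:

	// crictl probe sketch (illustrative). One container ID per output line;
	// an empty result is what every probe above reports.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func containerIDs(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil // empty slice => never started
	}

	func main() {
		for _, c := range []string{"kube-apiserver", "etcd", "coredns"} {
			ids, err := containerIDs(c)
			fmt.Println(c, ids, err)
		}
	}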
	I1209 05:51:55.657224 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:51:55.657236 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:51:55.718829 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:51:55.710893    1841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:51:55.711561    1841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:51:55.713071    1841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:51:55.713520    1841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:51:55.714997    1841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:51:55.710893    1841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:51:55.711561    1841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:51:55.713071    1841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:51:55.713520    1841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:51:55.714997    1841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
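The repeated `dial tcp [::1]:8443: connect: connection refused` is the telling detail: a refusal (as opposed to a timeout) means nothing is listening on the apiserver port at all, which is consistent with the empty crictl probes. A small Go sketch that makes the refused-vs-timeout distinction explicit; purely illustrative:

	// Port-check sketch (illustrative): "connection refused" = no listener;
	// a timeout would instead suggest a listener that is unresponsive.
	package main

	import (
		"errors"
		"fmt"
		"net"
		"syscall"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
		switch {
		case err == nil:
			conn.Close()
			fmt.Println("apiserver port is listening")
		case errors.Is(err, syscall.ECONNREFUSED):
			fmt.Println("refused: no listener, consistent with the empty crictl probes")
		default:
			fmt.Println("other failure:", err)
		}
	}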
	I1209 05:51:55.718849 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:51:55.718861 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:51:55.745044 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:51:55.745076 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:51:55.779273 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:51:55.779300 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:51:55.836724 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:51:55.836759 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
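That completes one log-gathering pass: the last 400 journal lines each for kubelet and containerd, plus kernel messages at warning level and above (dmesg -P disables the pager, -H prints human-readable timestamps, -L=never disables color). A sketch of the same gathering, assuming these util-linux/systemd flag meanings:

	// Log-gathering sketch (illustrative; mirrors the commands in the log).
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		cmds := map[string]string{
			"kubelet":    "sudo journalctl -u kubelet -n 400",
			"containerd": "sudo journalctl -u containerd -n 400",
			"dmesg":      "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
		}
		for name, c := range cmds {
			out, _ := exec.Command("/bin/bash", "-c", c).CombinedOutput()
			fmt.Printf("== %s ==\n%s", name, out)
		}
	}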
	I1209 05:51:58.354526 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:58.364806 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:51:58.364873 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:51:58.394168 1437114 cri.go:89] found id: ""
	I1209 05:51:58.394193 1437114 logs.go:282] 0 containers: []
	W1209 05:51:58.394201 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:51:58.394213 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:51:58.394269 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:51:58.419742 1437114 cri.go:89] found id: ""
	I1209 05:51:58.419776 1437114 logs.go:282] 0 containers: []
	W1209 05:51:58.419785 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:51:58.419792 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:51:58.419859 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:51:58.464612 1437114 cri.go:89] found id: ""
	I1209 05:51:58.464637 1437114 logs.go:282] 0 containers: []
	W1209 05:51:58.464646 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:51:58.464652 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:51:58.464707 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:51:58.496121 1437114 cri.go:89] found id: ""
	I1209 05:51:58.496148 1437114 logs.go:282] 0 containers: []
	W1209 05:51:58.496157 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:51:58.496163 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:51:58.496259 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:51:58.520390 1437114 cri.go:89] found id: ""
	I1209 05:51:58.520429 1437114 logs.go:282] 0 containers: []
	W1209 05:51:58.520439 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:51:58.520452 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:51:58.520531 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:51:58.546795 1437114 cri.go:89] found id: ""
	I1209 05:51:58.546828 1437114 logs.go:282] 0 containers: []
	W1209 05:51:58.546838 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:51:58.546847 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:51:58.546911 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:51:58.570252 1437114 cri.go:89] found id: ""
	I1209 05:51:58.570279 1437114 logs.go:282] 0 containers: []
	W1209 05:51:58.570289 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:51:58.570295 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:51:58.570359 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:51:58.594153 1437114 cri.go:89] found id: ""
	I1209 05:51:58.594178 1437114 logs.go:282] 0 containers: []
	W1209 05:51:58.594187 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:51:58.594195 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:51:58.594207 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:51:58.621218 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:51:58.621244 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:51:58.675840 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:51:58.675877 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:51:58.691699 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:51:58.691734 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:51:58.755150 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:51:58.747260    1968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:51:58.747839    1968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:51:58.749288    1968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:51:58.749743    1968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:51:58.751180    1968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:51:58.747260    1968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:51:58.747839    1968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:51:58.749288    1968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:51:58.749743    1968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:51:58.751180    1968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:51:58.755171 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:51:58.755185 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:52:01.281475 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:52:01.293255 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:52:01.293329 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:52:01.318701 1437114 cri.go:89] found id: ""
	I1209 05:52:01.318740 1437114 logs.go:282] 0 containers: []
	W1209 05:52:01.318749 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:52:01.318757 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:52:01.318827 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:52:01.343120 1437114 cri.go:89] found id: ""
	I1209 05:52:01.343145 1437114 logs.go:282] 0 containers: []
	W1209 05:52:01.343154 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:52:01.343170 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:52:01.343228 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:52:01.367699 1437114 cri.go:89] found id: ""
	I1209 05:52:01.367725 1437114 logs.go:282] 0 containers: []
	W1209 05:52:01.367733 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:52:01.367749 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:52:01.367823 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:52:01.394578 1437114 cri.go:89] found id: ""
	I1209 05:52:01.394603 1437114 logs.go:282] 0 containers: []
	W1209 05:52:01.394612 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:52:01.394618 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:52:01.394677 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:52:01.423264 1437114 cri.go:89] found id: ""
	I1209 05:52:01.423290 1437114 logs.go:282] 0 containers: []
	W1209 05:52:01.423299 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:52:01.423305 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:52:01.423367 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:52:01.460737 1437114 cri.go:89] found id: ""
	I1209 05:52:01.460764 1437114 logs.go:282] 0 containers: []
	W1209 05:52:01.460772 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:52:01.460778 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:52:01.460850 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:52:01.493246 1437114 cri.go:89] found id: ""
	I1209 05:52:01.493272 1437114 logs.go:282] 0 containers: []
	W1209 05:52:01.493281 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:52:01.493287 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:52:01.493364 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:52:01.517585 1437114 cri.go:89] found id: ""
	I1209 05:52:01.517612 1437114 logs.go:282] 0 containers: []
	W1209 05:52:01.517620 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:52:01.517630 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:52:01.517670 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:52:01.579907 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:52:01.571951    2063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:01.572467    2063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:01.574150    2063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:01.574485    2063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:01.575978    2063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:52:01.571951    2063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:01.572467    2063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:01.574150    2063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:01.574485    2063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:01.575978    2063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:52:01.579934 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:52:01.579951 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:52:01.605933 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:52:01.605968 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:52:01.633450 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:52:01.633476 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:52:01.690768 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:52:01.690809 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:52:03.378312 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1209 05:52:03.443761 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:52:03.443892 1437114 retry.go:31] will retry after 46.030372913s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:52:04.208154 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:52:04.218947 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:52:04.219023 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:52:04.250185 1437114 cri.go:89] found id: ""
	I1209 05:52:04.250210 1437114 logs.go:282] 0 containers: []
	W1209 05:52:04.250219 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:52:04.250226 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:52:04.250336 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:52:04.278437 1437114 cri.go:89] found id: ""
	I1209 05:52:04.278462 1437114 logs.go:282] 0 containers: []
	W1209 05:52:04.278471 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:52:04.278477 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:52:04.278540 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:52:04.306148 1437114 cri.go:89] found id: ""
	I1209 05:52:04.306212 1437114 logs.go:282] 0 containers: []
	W1209 05:52:04.306227 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:52:04.306235 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:52:04.306294 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:52:04.330968 1437114 cri.go:89] found id: ""
	I1209 05:52:04.330995 1437114 logs.go:282] 0 containers: []
	W1209 05:52:04.331003 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:52:04.331014 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:52:04.331074 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:52:04.361139 1437114 cri.go:89] found id: ""
	I1209 05:52:04.361213 1437114 logs.go:282] 0 containers: []
	W1209 05:52:04.361228 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:52:04.361235 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:52:04.361292 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:52:04.384663 1437114 cri.go:89] found id: ""
	I1209 05:52:04.384728 1437114 logs.go:282] 0 containers: []
	W1209 05:52:04.384744 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:52:04.384751 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:52:04.384819 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:52:04.409163 1437114 cri.go:89] found id: ""
	I1209 05:52:04.409188 1437114 logs.go:282] 0 containers: []
	W1209 05:52:04.409196 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:52:04.409202 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:52:04.409260 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:52:04.438875 1437114 cri.go:89] found id: ""
	I1209 05:52:04.438901 1437114 logs.go:282] 0 containers: []
	W1209 05:52:04.438911 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:52:04.438920 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:52:04.438930 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:52:04.504081 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:52:04.504118 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:52:04.520282 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:52:04.520314 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:52:04.582173 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:52:04.574497    2189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:04.575080    2189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:04.576516    2189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:04.576898    2189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:04.578287    2189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:52:04.574497    2189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:04.575080    2189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:04.576516    2189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:04.576898    2189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:04.578287    2189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:52:04.582197 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:52:04.582209 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:52:04.607423 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:52:04.607456 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:52:07.139347 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:52:07.149801 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:52:07.149872 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:52:07.174952 1437114 cri.go:89] found id: ""
	I1209 05:52:07.174980 1437114 logs.go:282] 0 containers: []
	W1209 05:52:07.174988 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:52:07.174995 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:52:07.175054 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:52:07.202325 1437114 cri.go:89] found id: ""
	I1209 05:52:07.202387 1437114 logs.go:282] 0 containers: []
	W1209 05:52:07.202418 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:52:07.202437 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:52:07.202533 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:52:07.232008 1437114 cri.go:89] found id: ""
	I1209 05:52:07.232092 1437114 logs.go:282] 0 containers: []
	W1209 05:52:07.232147 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:52:07.232170 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:52:07.232265 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:52:07.259048 1437114 cri.go:89] found id: ""
	I1209 05:52:07.259075 1437114 logs.go:282] 0 containers: []
	W1209 05:52:07.259084 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:52:07.259091 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:52:07.259147 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:52:07.283135 1437114 cri.go:89] found id: ""
	I1209 05:52:07.283161 1437114 logs.go:282] 0 containers: []
	W1209 05:52:07.283169 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:52:07.283175 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:52:07.283285 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:52:07.307259 1437114 cri.go:89] found id: ""
	I1209 05:52:07.307285 1437114 logs.go:282] 0 containers: []
	W1209 05:52:07.307294 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:52:07.307300 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:52:07.307357 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:52:07.331534 1437114 cri.go:89] found id: ""
	I1209 05:52:07.331604 1437114 logs.go:282] 0 containers: []
	W1209 05:52:07.331627 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:52:07.331645 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:52:07.331742 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:52:07.358525 1437114 cri.go:89] found id: ""
	I1209 05:52:07.358548 1437114 logs.go:282] 0 containers: []
	W1209 05:52:07.358557 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:52:07.358565 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:52:07.358577 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:52:07.424932 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:52:07.417064    2295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:07.417623    2295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:07.419222    2295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:07.419698    2295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:07.421122    2295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:52:07.417064    2295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:07.417623    2295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:07.419222    2295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:07.419698    2295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:07.421122    2295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:52:07.425003 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:52:07.425028 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:52:07.452549 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:52:07.452633 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:52:07.488600 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:52:07.488675 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:52:07.547568 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:52:07.547604 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:52:10.063961 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:52:10.075421 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:52:10.075510 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:52:10.106279 1437114 cri.go:89] found id: ""
	I1209 05:52:10.106307 1437114 logs.go:282] 0 containers: []
	W1209 05:52:10.106317 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:52:10.106323 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:52:10.106395 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:52:10.140825 1437114 cri.go:89] found id: ""
	I1209 05:52:10.140865 1437114 logs.go:282] 0 containers: []
	W1209 05:52:10.140874 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:52:10.140881 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:52:10.140961 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:52:10.166337 1437114 cri.go:89] found id: ""
	I1209 05:52:10.166364 1437114 logs.go:282] 0 containers: []
	W1209 05:52:10.166373 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:52:10.166380 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:52:10.166460 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:52:10.202390 1437114 cri.go:89] found id: ""
	I1209 05:52:10.202417 1437114 logs.go:282] 0 containers: []
	W1209 05:52:10.202426 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:52:10.202432 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:52:10.202541 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:52:10.230690 1437114 cri.go:89] found id: ""
	I1209 05:52:10.230716 1437114 logs.go:282] 0 containers: []
	W1209 05:52:10.230726 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:52:10.230733 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:52:10.230847 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:52:10.257345 1437114 cri.go:89] found id: ""
	I1209 05:52:10.257371 1437114 logs.go:282] 0 containers: []
	W1209 05:52:10.257380 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:52:10.257386 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:52:10.257452 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:52:10.282028 1437114 cri.go:89] found id: ""
	I1209 05:52:10.282053 1437114 logs.go:282] 0 containers: []
	W1209 05:52:10.282062 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:52:10.282069 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:52:10.282136 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:52:10.306484 1437114 cri.go:89] found id: ""
	I1209 05:52:10.306509 1437114 logs.go:282] 0 containers: []
	W1209 05:52:10.306519 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:52:10.306538 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:52:10.306550 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:52:10.334032 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:52:10.334059 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:52:10.396200 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:52:10.396241 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:52:10.412481 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:52:10.412513 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:52:10.512214 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:52:10.503459    2422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:10.504106    2422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:10.505795    2422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:10.506184    2422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:10.507800    2422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:52:10.503459    2422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:10.504106    2422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:10.505795    2422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:10.506184    2422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:10.507800    2422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:52:10.512237 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:52:10.512250 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:52:13.038285 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:52:13.048783 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:52:13.048856 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:52:13.073147 1437114 cri.go:89] found id: ""
	I1209 05:52:13.073174 1437114 logs.go:282] 0 containers: []
	W1209 05:52:13.073182 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:52:13.073189 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:52:13.073264 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:52:13.096887 1437114 cri.go:89] found id: ""
	I1209 05:52:13.096911 1437114 logs.go:282] 0 containers: []
	W1209 05:52:13.096919 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:52:13.096926 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:52:13.096983 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:52:13.120441 1437114 cri.go:89] found id: ""
	I1209 05:52:13.120466 1437114 logs.go:282] 0 containers: []
	W1209 05:52:13.120475 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:52:13.120482 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:52:13.120540 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:52:13.144403 1437114 cri.go:89] found id: ""
	I1209 05:52:13.144478 1437114 logs.go:282] 0 containers: []
	W1209 05:52:13.144494 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:52:13.144504 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:52:13.144576 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:52:13.168584 1437114 cri.go:89] found id: ""
	I1209 05:52:13.168610 1437114 logs.go:282] 0 containers: []
	W1209 05:52:13.168619 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:52:13.168626 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:52:13.168683 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:52:13.204797 1437114 cri.go:89] found id: ""
	I1209 05:52:13.204824 1437114 logs.go:282] 0 containers: []
	W1209 05:52:13.204833 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:52:13.204840 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:52:13.204899 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:52:13.231178 1437114 cri.go:89] found id: ""
	I1209 05:52:13.231205 1437114 logs.go:282] 0 containers: []
	W1209 05:52:13.231214 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:52:13.231220 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:52:13.231278 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:52:13.260307 1437114 cri.go:89] found id: ""
	I1209 05:52:13.260331 1437114 logs.go:282] 0 containers: []
	W1209 05:52:13.260341 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:52:13.260350 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:52:13.260361 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:52:13.286145 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:52:13.286182 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:52:13.315119 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:52:13.315147 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:52:13.369862 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:52:13.369894 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:52:13.385795 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:52:13.385822 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:52:13.451305 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:52:13.443201    2536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:13.444044    2536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:13.445720    2536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:13.446006    2536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:13.447466    2536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
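
Each retry cycle above has the same shape: minikube first looks for a running apiserver process (pgrep -xnf kube-apiserver.*minikube.*), then asks crictl for containers matching each control-plane component; every query returns an empty ID list, so it falls back to gathering journalctl, dmesg, and "describe nodes" output. A minimal Go sketch of that container scan, assuming crictl is on the PATH (the component list and flags mirror the log; running the command locally via os/exec stands in for minikube's SSH runner):

// A minimal sketch (not minikube's actual code) of the container scan above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainers mirrors `sudo crictl ps -a --quiet --name=<name>` from the
// log: it returns whatever container IDs crictl prints, one per line.
func listContainers(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, name := range components {
		ids, err := listContainers(name)
		if err != nil || len(ids) == 0 {
			// This is the state the log is stuck in: every component
			// reports `found id: ""` and an empty container list.
			fmt.Printf("no container was found matching %q\n", name)
			continue
		}
		fmt.Printf("%s: %v\n", name, ids)
	}
}
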
	I1209 05:52:15.952193 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:52:15.962440 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:52:15.962511 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:52:15.990421 1437114 cri.go:89] found id: ""
	I1209 05:52:15.990444 1437114 logs.go:282] 0 containers: []
	W1209 05:52:15.990452 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:52:15.990459 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:52:15.990527 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:52:16.025731 1437114 cri.go:89] found id: ""
	I1209 05:52:16.025759 1437114 logs.go:282] 0 containers: []
	W1209 05:52:16.025768 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:52:16.025775 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:52:16.025850 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:52:16.051150 1437114 cri.go:89] found id: ""
	I1209 05:52:16.051184 1437114 logs.go:282] 0 containers: []
	W1209 05:52:16.051193 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:52:16.051199 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:52:16.051269 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:52:16.080315 1437114 cri.go:89] found id: ""
	I1209 05:52:16.080343 1437114 logs.go:282] 0 containers: []
	W1209 05:52:16.080352 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:52:16.080358 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:52:16.080421 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:52:16.106254 1437114 cri.go:89] found id: ""
	I1209 05:52:16.106329 1437114 logs.go:282] 0 containers: []
	W1209 05:52:16.106344 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:52:16.106351 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:52:16.106419 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:52:16.130691 1437114 cri.go:89] found id: ""
	I1209 05:52:16.130717 1437114 logs.go:282] 0 containers: []
	W1209 05:52:16.130726 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:52:16.130732 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:52:16.130788 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:52:16.156232 1437114 cri.go:89] found id: ""
	I1209 05:52:16.156257 1437114 logs.go:282] 0 containers: []
	W1209 05:52:16.156266 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:52:16.156272 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:52:16.156333 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:52:16.186070 1437114 cri.go:89] found id: ""
	I1209 05:52:16.186091 1437114 logs.go:282] 0 containers: []
	W1209 05:52:16.186100 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:52:16.186109 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:52:16.186121 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:52:16.203551 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:52:16.203579 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:52:16.280037 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:52:16.272128    2631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:16.272800    2631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:16.274272    2631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:16.274686    2631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:16.276185    2631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
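
Every kubectl failure in these blocks reduces to "dial tcp [::1]:8443: connect: connection refused": nothing is listening on the apiserver port at all, which is consistent with the empty crictl results. A quick standalone probe (a sketch, not part of the test suite) that separates a closed port from TLS or credential problems:

// A minimal probe, assuming nothing beyond the standard library: dial the
// apiserver port from the log to distinguish "nothing listening" (connection
// refused, as above) from TLS or auth failures, which only occur after a
// TCP connection succeeds.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver port unreachable:", err) // e.g. connect: connection refused
		return
	}
	conn.Close()
	fmt.Println("something is listening on localhost:8443")
}
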
	I1209 05:52:16.280087 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:52:16.280102 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:52:16.304445 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:52:16.304479 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:52:16.333574 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:52:16.333599 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:52:18.890807 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:52:18.901129 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:52:18.901207 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:52:18.925553 1437114 cri.go:89] found id: ""
	I1209 05:52:18.925576 1437114 logs.go:282] 0 containers: []
	W1209 05:52:18.925584 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:52:18.925590 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:52:18.925648 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:52:18.951104 1437114 cri.go:89] found id: ""
	I1209 05:52:18.951180 1437114 logs.go:282] 0 containers: []
	W1209 05:52:18.951203 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:52:18.951221 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:52:18.951309 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:52:18.975343 1437114 cri.go:89] found id: ""
	I1209 05:52:18.975407 1437114 logs.go:282] 0 containers: []
	W1209 05:52:18.975432 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:52:18.975450 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:52:18.975535 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:52:18.999522 1437114 cri.go:89] found id: ""
	I1209 05:52:18.999596 1437114 logs.go:282] 0 containers: []
	W1209 05:52:18.999619 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:52:18.999637 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:52:18.999722 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:52:19.025106 1437114 cri.go:89] found id: ""
	I1209 05:52:19.025181 1437114 logs.go:282] 0 containers: []
	W1209 05:52:19.025203 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:52:19.025221 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:52:19.025307 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:52:19.047867 1437114 cri.go:89] found id: ""
	I1209 05:52:19.047944 1437114 logs.go:282] 0 containers: []
	W1209 05:52:19.047966 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:52:19.048006 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:52:19.048106 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:52:19.071487 1437114 cri.go:89] found id: ""
	I1209 05:52:19.071511 1437114 logs.go:282] 0 containers: []
	W1209 05:52:19.071519 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:52:19.071526 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:52:19.071585 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:52:19.096506 1437114 cri.go:89] found id: ""
	I1209 05:52:19.096531 1437114 logs.go:282] 0 containers: []
	W1209 05:52:19.096540 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:52:19.096549 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:52:19.096595 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:52:19.111961 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:52:19.112001 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:52:19.184448 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:52:19.173564    2740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:19.174163    2740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:19.175662    2740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:19.176275    2740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:19.178917    2740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:52:19.184473 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:52:19.184487 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:52:19.213109 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:52:19.213148 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:52:19.242001 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:52:19.242036 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:52:21.800441 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:52:21.810634 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:52:21.810706 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:52:21.835147 1437114 cri.go:89] found id: ""
	I1209 05:52:21.835171 1437114 logs.go:282] 0 containers: []
	W1209 05:52:21.835180 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:52:21.835186 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:52:21.835244 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:52:21.863735 1437114 cri.go:89] found id: ""
	I1209 05:52:21.863760 1437114 logs.go:282] 0 containers: []
	W1209 05:52:21.863769 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:52:21.863775 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:52:21.863833 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:52:21.887643 1437114 cri.go:89] found id: ""
	I1209 05:52:21.887667 1437114 logs.go:282] 0 containers: []
	W1209 05:52:21.887676 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:52:21.887682 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:52:21.887738 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:52:21.912358 1437114 cri.go:89] found id: ""
	I1209 05:52:21.912384 1437114 logs.go:282] 0 containers: []
	W1209 05:52:21.912392 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:52:21.912399 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:52:21.912458 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:52:21.941394 1437114 cri.go:89] found id: ""
	I1209 05:52:21.941420 1437114 logs.go:282] 0 containers: []
	W1209 05:52:21.941429 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:52:21.941435 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:52:21.941521 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:52:21.945768 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 05:52:21.973669 1437114 cri.go:89] found id: ""
	I1209 05:52:21.973703 1437114 logs.go:282] 0 containers: []
	W1209 05:52:21.973712 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:52:21.973734 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:52:21.973814 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	W1209 05:52:22.028092 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:52:22.028115 1437114 cri.go:89] found id: ""
	I1209 05:52:22.028247 1437114 logs.go:282] 0 containers: []
	W1209 05:52:22.028256 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:52:22.028268 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	W1209 05:52:22.028296 1437114 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
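
The default-storageclass addon fails for the same underlying reason: with no apiserver on localhost:8443, kubectl cannot download the OpenAPI schema it uses for client-side validation, so the apply errors out and minikube queues a retry. A rough sketch of such a retry wrapper, with the kubeconfig and manifest paths taken from the log (applyWithRetry itself is a hypothetical helper, not minikube's addons code):

// A rough sketch of retrying `kubectl apply` until the apiserver is reachable.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

// applyWithRetry is a hypothetical helper: it re-runs the apply a few times,
// since the failure above is transient while the apiserver is coming up.
func applyWithRetry(kubeconfig, manifest string, attempts int) error {
	var lastErr error
	for i := 0; i < attempts; i++ {
		cmd := exec.Command("kubectl", "apply", "--force", "-f", manifest)
		cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
		out, err := cmd.CombinedOutput()
		if err == nil {
			return nil
		}
		lastErr = fmt.Errorf("apply attempt %d failed: %v: %s", i+1, err, out)
		time.Sleep(time.Duration(i+1) * 2 * time.Second) // linear backoff
	}
	return lastErr
}

func main() {
	err := applyWithRetry("/var/lib/minikube/kubeconfig",
		"/etc/kubernetes/addons/storageclass.yaml", 3)
	if err != nil {
		fmt.Println(err)
	}
}
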
	I1209 05:52:22.028335 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:52:22.054827 1437114 cri.go:89] found id: ""
	I1209 05:52:22.054854 1437114 logs.go:282] 0 containers: []
	W1209 05:52:22.054862 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:52:22.054871 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:52:22.054883 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:52:22.081941 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:52:22.081985 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:52:22.109801 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:52:22.109829 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:52:22.167418 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:52:22.167455 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:52:22.186947 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:52:22.187039 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:52:22.274107 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:52:22.265076    2876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:22.265712    2876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:22.267349    2876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:22.267990    2876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:22.269553    2876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:52:24.774371 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:52:24.785291 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:52:24.785383 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:52:24.810496 1437114 cri.go:89] found id: ""
	I1209 05:52:24.810521 1437114 logs.go:282] 0 containers: []
	W1209 05:52:24.810530 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:52:24.810537 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:52:24.810641 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:52:24.840246 1437114 cri.go:89] found id: ""
	I1209 05:52:24.840283 1437114 logs.go:282] 0 containers: []
	W1209 05:52:24.840292 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:52:24.840298 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:52:24.840383 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:52:24.866227 1437114 cri.go:89] found id: ""
	I1209 05:52:24.866252 1437114 logs.go:282] 0 containers: []
	W1209 05:52:24.866267 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:52:24.866274 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:52:24.866334 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:52:24.894487 1437114 cri.go:89] found id: ""
	I1209 05:52:24.894512 1437114 logs.go:282] 0 containers: []
	W1209 05:52:24.894521 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:52:24.894528 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:52:24.894592 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:52:24.919081 1437114 cri.go:89] found id: ""
	I1209 05:52:24.919106 1437114 logs.go:282] 0 containers: []
	W1209 05:52:24.919115 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:52:24.919122 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:52:24.919182 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:52:24.942639 1437114 cri.go:89] found id: ""
	I1209 05:52:24.942664 1437114 logs.go:282] 0 containers: []
	W1209 05:52:24.942673 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:52:24.942679 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:52:24.942736 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:52:24.966811 1437114 cri.go:89] found id: ""
	I1209 05:52:24.966835 1437114 logs.go:282] 0 containers: []
	W1209 05:52:24.966844 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:52:24.966849 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:52:24.966906 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:52:24.990491 1437114 cri.go:89] found id: ""
	I1209 05:52:24.990515 1437114 logs.go:282] 0 containers: []
	W1209 05:52:24.990524 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:52:24.990533 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:52:24.990544 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:52:25.049211 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:52:25.049244 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:52:25.065441 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:52:25.065469 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:52:25.128713 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:52:25.120700    2971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:25.121283    2971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:25.122776    2971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:25.123296    2971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:25.124752    2971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:52:25.128735 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:52:25.128750 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:52:25.154485 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:52:25.154518 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:52:27.686448 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:52:27.697271 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:52:27.697388 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:52:27.723850 1437114 cri.go:89] found id: ""
	I1209 05:52:27.723930 1437114 logs.go:282] 0 containers: []
	W1209 05:52:27.723953 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:52:27.723970 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:52:27.724082 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:52:27.749864 1437114 cri.go:89] found id: ""
	I1209 05:52:27.749889 1437114 logs.go:282] 0 containers: []
	W1209 05:52:27.749897 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:52:27.749904 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:52:27.749989 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:52:27.773124 1437114 cri.go:89] found id: ""
	I1209 05:52:27.773151 1437114 logs.go:282] 0 containers: []
	W1209 05:52:27.773167 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:52:27.773174 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:52:27.773238 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:52:27.802090 1437114 cri.go:89] found id: ""
	I1209 05:52:27.802118 1437114 logs.go:282] 0 containers: []
	W1209 05:52:27.802128 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:52:27.802134 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:52:27.802193 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:52:27.827324 1437114 cri.go:89] found id: ""
	I1209 05:52:27.827349 1437114 logs.go:282] 0 containers: []
	W1209 05:52:27.827361 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:52:27.827367 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:52:27.827425 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:52:27.855877 1437114 cri.go:89] found id: ""
	I1209 05:52:27.855905 1437114 logs.go:282] 0 containers: []
	W1209 05:52:27.855914 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:52:27.855920 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:52:27.855980 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:52:27.880242 1437114 cri.go:89] found id: ""
	I1209 05:52:27.880322 1437114 logs.go:282] 0 containers: []
	W1209 05:52:27.880346 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:52:27.880365 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:52:27.880457 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:52:27.903986 1437114 cri.go:89] found id: ""
	I1209 05:52:27.904032 1437114 logs.go:282] 0 containers: []
	W1209 05:52:27.904041 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:52:27.904079 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:52:27.904100 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:52:27.937811 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:52:27.937838 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:52:27.993533 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:52:27.993570 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:52:28.010780 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:52:28.010818 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:52:28.075391 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:52:28.066786    3096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:28.067667    3096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:28.069423    3096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:28.069776    3096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:28.071145    3096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:52:28.075424 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:52:28.075454 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:52:30.602097 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:52:30.612434 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:52:30.612508 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:52:30.638153 1437114 cri.go:89] found id: ""
	I1209 05:52:30.638183 1437114 logs.go:282] 0 containers: []
	W1209 05:52:30.638191 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:52:30.638197 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:52:30.638280 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:52:30.664120 1437114 cri.go:89] found id: ""
	I1209 05:52:30.664206 1437114 logs.go:282] 0 containers: []
	W1209 05:52:30.664221 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:52:30.664229 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:52:30.664291 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:52:30.695098 1437114 cri.go:89] found id: ""
	I1209 05:52:30.695124 1437114 logs.go:282] 0 containers: []
	W1209 05:52:30.695132 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:52:30.695138 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:52:30.695196 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:52:30.728679 1437114 cri.go:89] found id: ""
	I1209 05:52:30.728703 1437114 logs.go:282] 0 containers: []
	W1209 05:52:30.728711 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:52:30.728718 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:52:30.728777 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:52:30.757085 1437114 cri.go:89] found id: ""
	I1209 05:52:30.757108 1437114 logs.go:282] 0 containers: []
	W1209 05:52:30.757116 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:52:30.757122 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:52:30.757190 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:52:30.781813 1437114 cri.go:89] found id: ""
	I1209 05:52:30.781838 1437114 logs.go:282] 0 containers: []
	W1209 05:52:30.781847 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:52:30.781853 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:52:30.781931 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:52:30.805893 1437114 cri.go:89] found id: ""
	I1209 05:52:30.805958 1437114 logs.go:282] 0 containers: []
	W1209 05:52:30.805972 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:52:30.805980 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:52:30.806045 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:52:30.838632 1437114 cri.go:89] found id: ""
	I1209 05:52:30.838657 1437114 logs.go:282] 0 containers: []
	W1209 05:52:30.838666 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:52:30.838675 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:52:30.838686 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:52:30.853978 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:52:30.854004 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:52:30.918110 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:52:30.910818    3197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:30.911400    3197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:30.912432    3197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:30.912927    3197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:30.914407    3197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:52:30.918132 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:52:30.918144 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:52:30.943105 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:52:30.943142 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:52:30.969706 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:52:30.969735 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:52:33.525286 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:52:33.535730 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:52:33.535803 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:52:33.559344 1437114 cri.go:89] found id: ""
	I1209 05:52:33.559369 1437114 logs.go:282] 0 containers: []
	W1209 05:52:33.559378 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:52:33.559384 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:52:33.559441 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:52:33.588185 1437114 cri.go:89] found id: ""
	I1209 05:52:33.588254 1437114 logs.go:282] 0 containers: []
	W1209 05:52:33.588278 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:52:33.588292 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:52:33.588366 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:52:33.613255 1437114 cri.go:89] found id: ""
	I1209 05:52:33.613279 1437114 logs.go:282] 0 containers: []
	W1209 05:52:33.613288 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:52:33.613295 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:52:33.613382 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:52:33.636919 1437114 cri.go:89] found id: ""
	I1209 05:52:33.636953 1437114 logs.go:282] 0 containers: []
	W1209 05:52:33.636961 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:52:33.636968 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:52:33.637035 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:52:33.666309 1437114 cri.go:89] found id: ""
	I1209 05:52:33.666342 1437114 logs.go:282] 0 containers: []
	W1209 05:52:33.666351 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:52:33.666358 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:52:33.666424 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:52:33.698208 1437114 cri.go:89] found id: ""
	I1209 05:52:33.698283 1437114 logs.go:282] 0 containers: []
	W1209 05:52:33.698305 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:52:33.698324 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:52:33.698413 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:52:33.730383 1437114 cri.go:89] found id: ""
	I1209 05:52:33.730456 1437114 logs.go:282] 0 containers: []
	W1209 05:52:33.730479 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:52:33.730499 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:52:33.730585 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:52:33.759854 1437114 cri.go:89] found id: ""
	I1209 05:52:33.759930 1437114 logs.go:282] 0 containers: []
	W1209 05:52:33.759952 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:52:33.759972 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:52:33.760007 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:52:33.822572 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:52:33.815081    3306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:33.815468    3306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:33.816948    3306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:33.817250    3306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:33.818729    3306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:52:33.815081    3306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:33.815468    3306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:33.816948    3306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:33.817250    3306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:33.818729    3306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
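	The five memcache.go lines in the block above are kubectl's API-discovery retries: before it can run "describe nodes", kubectl fetches the server's API group list, and every attempt against the endpoint named in /var/lib/minikube/kubeconfig is refused because nothing is listening on port 8443 inside the node. A hedged manual check of which endpoint that kubeconfig actually points at (assuming the node container from this run still exists; <node> is a placeholder for its docker name):

	    # Hypothetical post-mortem check; <node> stands in for the minikube node container.
	    docker exec <node> sudo grep 'server:' /var/lib/minikube/kubeconfig
	    # expected output resembling: server: https://localhost:8443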
	I1209 05:52:33.822593 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:52:33.822606 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:52:33.848713 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:52:33.848751 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:52:33.875169 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:52:33.875202 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:52:33.929863 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:52:33.929899 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
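	Each block like the one above is a single pass of minikube's wait loop: pgrep for a kube-apiserver process, a crictl listing for every expected control-plane container (all empty here), then a diagnostics sweep over describe nodes, containerd, container status, kubelet, and dmesg before sleeping a few seconds and retrying. The same checks can be replayed by hand; a minimal sketch, assuming the node container is still running and that crictl, journalctl, and curl are present in its image (<node> is a placeholder):

	    # Hedged sketch of the loop's checks, run manually.
	    docker exec <node> sudo crictl ps -a --name=kube-apiserver   # empty while the pod never starts
	    docker exec <node> sudo journalctl -u kubelet -n 50          # why kubelet is not creating it
	    docker exec <node> curl -sk https://localhost:8443/livez     # refused until the apiserver is up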
	I1209 05:52:36.446655 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:52:36.457494 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:52:36.457564 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:52:36.489953 1437114 cri.go:89] found id: ""
	I1209 05:52:36.490015 1437114 logs.go:282] 0 containers: []
	W1209 05:52:36.490045 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:52:36.490069 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:52:36.490171 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:52:36.518208 1437114 cri.go:89] found id: ""
	I1209 05:52:36.518232 1437114 logs.go:282] 0 containers: []
	W1209 05:52:36.518240 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:52:36.518246 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:52:36.518303 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:52:36.546757 1437114 cri.go:89] found id: ""
	I1209 05:52:36.546830 1437114 logs.go:282] 0 containers: []
	W1209 05:52:36.546852 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:52:36.546870 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:52:36.546958 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:52:36.573478 1437114 cri.go:89] found id: ""
	I1209 05:52:36.573504 1437114 logs.go:282] 0 containers: []
	W1209 05:52:36.573512 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:52:36.573518 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:52:36.573573 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:52:36.597359 1437114 cri.go:89] found id: ""
	I1209 05:52:36.597384 1437114 logs.go:282] 0 containers: []
	W1209 05:52:36.597392 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:52:36.597399 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:52:36.597456 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:52:36.626723 1437114 cri.go:89] found id: ""
	I1209 05:52:36.626750 1437114 logs.go:282] 0 containers: []
	W1209 05:52:36.626758 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:52:36.626765 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:52:36.626821 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:52:36.651878 1437114 cri.go:89] found id: ""
	I1209 05:52:36.651904 1437114 logs.go:282] 0 containers: []
	W1209 05:52:36.651913 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:52:36.651920 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:52:36.651983 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:52:36.677687 1437114 cri.go:89] found id: ""
	I1209 05:52:36.677763 1437114 logs.go:282] 0 containers: []
	W1209 05:52:36.677786 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:52:36.677806 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:52:36.677844 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:52:36.762388 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:52:36.754574    3419 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:36.755265    3419 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:36.756812    3419 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:36.757117    3419 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:36.758563    3419 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:52:36.754574    3419 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:36.755265    3419 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:36.756812    3419 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:36.757117    3419 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:36.758563    3419 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:52:36.762408 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:52:36.762421 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:52:36.787210 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:52:36.787245 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:52:36.813523 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:52:36.813549 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:52:36.871098 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:52:36.871134 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:52:39.145660 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1209 05:52:39.203856 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 05:52:39.203957 1437114 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
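	The storage-provisioner failure has the same root cause as the describe-nodes failures: kubectl apply performs client-side validation first, which requires downloading the /openapi/v2 document from the apiserver, so validation dies on the same refused connection. The --validate=false hint in the error only skips that schema download; a minimal sketch of the suggested retry (paths copied from the log, and assumed to fail anyway, since the apply itself still needs a live apiserver):

	    # Hedged: skipping validation does not help while localhost:8443 is down.
	    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	      /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force --validate=false \
	      -f /etc/kubernetes/addons/storage-provisioner.yaml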
	I1209 05:52:39.388175 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:52:39.398492 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:52:39.398583 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:52:39.425881 1437114 cri.go:89] found id: ""
	I1209 05:52:39.425914 1437114 logs.go:282] 0 containers: []
	W1209 05:52:39.425924 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:52:39.425930 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:52:39.425998 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:52:39.450356 1437114 cri.go:89] found id: ""
	I1209 05:52:39.450390 1437114 logs.go:282] 0 containers: []
	W1209 05:52:39.450399 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:52:39.450405 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:52:39.450472 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:52:39.482441 1437114 cri.go:89] found id: ""
	I1209 05:52:39.482475 1437114 logs.go:282] 0 containers: []
	W1209 05:52:39.482483 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:52:39.482490 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:52:39.482554 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:52:39.512577 1437114 cri.go:89] found id: ""
	I1209 05:52:39.512602 1437114 logs.go:282] 0 containers: []
	W1209 05:52:39.512611 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:52:39.512617 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:52:39.512674 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:52:39.537514 1437114 cri.go:89] found id: ""
	I1209 05:52:39.537539 1437114 logs.go:282] 0 containers: []
	W1209 05:52:39.537547 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:52:39.537559 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:52:39.537620 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:52:39.561319 1437114 cri.go:89] found id: ""
	I1209 05:52:39.561352 1437114 logs.go:282] 0 containers: []
	W1209 05:52:39.561360 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:52:39.561366 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:52:39.561442 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:52:39.589300 1437114 cri.go:89] found id: ""
	I1209 05:52:39.589324 1437114 logs.go:282] 0 containers: []
	W1209 05:52:39.589333 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:52:39.589339 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:52:39.589398 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:52:39.620288 1437114 cri.go:89] found id: ""
	I1209 05:52:39.620312 1437114 logs.go:282] 0 containers: []
	W1209 05:52:39.620321 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:52:39.620339 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:52:39.620351 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:52:39.678215 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:52:39.678293 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:52:39.697337 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:52:39.697364 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:52:39.767115 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:52:39.758981    3543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:39.759384    3543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:39.761296    3543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:39.761699    3543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:39.763232    3543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:52:39.758981    3543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:39.759384    3543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:39.761296    3543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:39.761699    3543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:39.763232    3543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:52:39.767135 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:52:39.767147 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:52:39.791949 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:52:39.791985 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:52:42.324195 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:52:42.339508 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:52:42.339591 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:52:42.370155 1437114 cri.go:89] found id: ""
	I1209 05:52:42.370181 1437114 logs.go:282] 0 containers: []
	W1209 05:52:42.370192 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:52:42.370199 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:52:42.370268 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:52:42.395020 1437114 cri.go:89] found id: ""
	I1209 05:52:42.395054 1437114 logs.go:282] 0 containers: []
	W1209 05:52:42.395063 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:52:42.395069 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:52:42.395136 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:52:42.423571 1437114 cri.go:89] found id: ""
	I1209 05:52:42.423604 1437114 logs.go:282] 0 containers: []
	W1209 05:52:42.423612 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:52:42.423618 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:52:42.423684 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:52:42.449744 1437114 cri.go:89] found id: ""
	I1209 05:52:42.449821 1437114 logs.go:282] 0 containers: []
	W1209 05:52:42.449846 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:52:42.449865 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:52:42.449951 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:52:42.476838 1437114 cri.go:89] found id: ""
	I1209 05:52:42.476864 1437114 logs.go:282] 0 containers: []
	W1209 05:52:42.476872 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:52:42.476879 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:52:42.476957 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:52:42.505251 1437114 cri.go:89] found id: ""
	I1209 05:52:42.505278 1437114 logs.go:282] 0 containers: []
	W1209 05:52:42.505287 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:52:42.505294 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:52:42.505372 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:52:42.529646 1437114 cri.go:89] found id: ""
	I1209 05:52:42.529712 1437114 logs.go:282] 0 containers: []
	W1209 05:52:42.529728 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:52:42.529741 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:52:42.529803 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:52:42.553792 1437114 cri.go:89] found id: ""
	I1209 05:52:42.553818 1437114 logs.go:282] 0 containers: []
	W1209 05:52:42.553827 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:52:42.553836 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:52:42.553865 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:52:42.610712 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:52:42.610750 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:52:42.626470 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:52:42.626498 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:52:42.691633 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:52:42.681192    3655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:42.683916    3655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:42.685453    3655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:42.685744    3655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:42.687188    3655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:52:42.681192    3655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:42.683916    3655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:42.685453    3655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:42.685744    3655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:42.687188    3655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:52:42.691658 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:52:42.691672 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:52:42.721023 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:52:42.721056 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:52:45.257072 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:52:45.279876 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:52:45.279970 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:52:45.310797 1437114 cri.go:89] found id: ""
	I1209 05:52:45.310822 1437114 logs.go:282] 0 containers: []
	W1209 05:52:45.310831 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:52:45.310837 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:52:45.310915 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:52:45.339967 1437114 cri.go:89] found id: ""
	I1209 05:52:45.339990 1437114 logs.go:282] 0 containers: []
	W1209 05:52:45.339999 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:52:45.340004 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:52:45.340083 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:52:45.368323 1437114 cri.go:89] found id: ""
	I1209 05:52:45.368351 1437114 logs.go:282] 0 containers: []
	W1209 05:52:45.368360 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:52:45.368368 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:52:45.368427 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:52:45.393892 1437114 cri.go:89] found id: ""
	I1209 05:52:45.393918 1437114 logs.go:282] 0 containers: []
	W1209 05:52:45.393926 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:52:45.393932 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:52:45.393995 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:52:45.418992 1437114 cri.go:89] found id: ""
	I1209 05:52:45.419025 1437114 logs.go:282] 0 containers: []
	W1209 05:52:45.419035 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:52:45.419041 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:52:45.419107 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:52:45.461356 1437114 cri.go:89] found id: ""
	I1209 05:52:45.461392 1437114 logs.go:282] 0 containers: []
	W1209 05:52:45.461401 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:52:45.461407 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:52:45.461481 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:52:45.493718 1437114 cri.go:89] found id: ""
	I1209 05:52:45.493753 1437114 logs.go:282] 0 containers: []
	W1209 05:52:45.493762 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:52:45.493768 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:52:45.493836 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:52:45.517850 1437114 cri.go:89] found id: ""
	I1209 05:52:45.517876 1437114 logs.go:282] 0 containers: []
	W1209 05:52:45.517898 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:52:45.517907 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:52:45.517922 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:52:45.576699 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:52:45.576736 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:52:45.592339 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:52:45.592368 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:52:45.660368 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:52:45.651938    3766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:45.652711    3766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:45.654414    3766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:45.654934    3766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:45.656559    3766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:52:45.651938    3766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:45.652711    3766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:45.654414    3766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:45.654934    3766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:45.656559    3766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:52:45.660391 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:52:45.660404 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:52:45.687142 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:52:45.687222 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:52:48.227261 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:52:48.237593 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:52:48.237680 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:52:48.260468 1437114 cri.go:89] found id: ""
	I1209 05:52:48.260493 1437114 logs.go:282] 0 containers: []
	W1209 05:52:48.260502 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:52:48.260509 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:52:48.260570 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:52:48.289034 1437114 cri.go:89] found id: ""
	I1209 05:52:48.289059 1437114 logs.go:282] 0 containers: []
	W1209 05:52:48.289068 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:52:48.289074 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:52:48.289150 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:52:48.316323 1437114 cri.go:89] found id: ""
	I1209 05:52:48.316349 1437114 logs.go:282] 0 containers: []
	W1209 05:52:48.316358 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:52:48.316364 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:52:48.316434 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:52:48.342218 1437114 cri.go:89] found id: ""
	I1209 05:52:48.342240 1437114 logs.go:282] 0 containers: []
	W1209 05:52:48.342249 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:52:48.342255 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:52:48.342308 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:52:48.371363 1437114 cri.go:89] found id: ""
	I1209 05:52:48.371390 1437114 logs.go:282] 0 containers: []
	W1209 05:52:48.371399 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:52:48.371406 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:52:48.371466 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:52:48.395178 1437114 cri.go:89] found id: ""
	I1209 05:52:48.395204 1437114 logs.go:282] 0 containers: []
	W1209 05:52:48.395212 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:52:48.395218 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:52:48.395274 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:52:48.419670 1437114 cri.go:89] found id: ""
	I1209 05:52:48.419709 1437114 logs.go:282] 0 containers: []
	W1209 05:52:48.419718 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:52:48.419740 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:52:48.419825 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:52:48.461924 1437114 cri.go:89] found id: ""
	I1209 05:52:48.461946 1437114 logs.go:282] 0 containers: []
	W1209 05:52:48.461954 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:52:48.461963 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:52:48.461974 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:52:48.528889 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:52:48.528926 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:52:48.544946 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:52:48.544976 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:52:48.610447 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:52:48.602428    3879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:48.603193    3879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:48.604673    3879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:48.605169    3879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:48.606641    3879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:52:48.602428    3879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:48.603193    3879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:48.604673    3879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:48.605169    3879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:48.606641    3879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:52:48.610466 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:52:48.610478 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:52:48.636193 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:52:48.636232 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:52:49.474531 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1209 05:52:49.539382 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 05:52:49.539481 1437114 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1209 05:52:49.543501 1437114 out.go:179] * Enabled addons: 
	I1209 05:52:49.546285 1437114 addons.go:530] duration metric: took 1m54.169473068s for enable addons: enabled=[]
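	After retrying both addons against the dead endpoint, minikube gives up and records an empty addon set (enabled=[]) about 1m54s after the enable phase began, consistent with the apiserver never having come up. The resulting addon state can be inspected afterwards; a hedged one-liner, with <profile> standing in for the profile used by this run:

	    # Hypothetical post-mortem listing of addon state for the failed profile.
	    out/minikube-linux-arm64 -p <profile> addons list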
	I1209 05:52:51.163525 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:52:51.174339 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:52:51.174465 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:52:51.198800 1437114 cri.go:89] found id: ""
	I1209 05:52:51.198828 1437114 logs.go:282] 0 containers: []
	W1209 05:52:51.198837 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:52:51.198843 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:52:51.198901 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:52:51.224524 1437114 cri.go:89] found id: ""
	I1209 05:52:51.224552 1437114 logs.go:282] 0 containers: []
	W1209 05:52:51.224561 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:52:51.224568 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:52:51.224626 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:52:51.249032 1437114 cri.go:89] found id: ""
	I1209 05:52:51.249099 1437114 logs.go:282] 0 containers: []
	W1209 05:52:51.249122 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:52:51.249136 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:52:51.249210 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:52:51.272901 1437114 cri.go:89] found id: ""
	I1209 05:52:51.272929 1437114 logs.go:282] 0 containers: []
	W1209 05:52:51.272937 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:52:51.272950 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:52:51.273011 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:52:51.296909 1437114 cri.go:89] found id: ""
	I1209 05:52:51.296935 1437114 logs.go:282] 0 containers: []
	W1209 05:52:51.296943 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:52:51.296949 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:52:51.297007 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:52:51.325419 1437114 cri.go:89] found id: ""
	I1209 05:52:51.325499 1437114 logs.go:282] 0 containers: []
	W1209 05:52:51.325522 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:52:51.325537 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:52:51.325609 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:52:51.350449 1437114 cri.go:89] found id: ""
	I1209 05:52:51.350475 1437114 logs.go:282] 0 containers: []
	W1209 05:52:51.350484 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:52:51.350490 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:52:51.350571 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:52:51.378459 1437114 cri.go:89] found id: ""
	I1209 05:52:51.378482 1437114 logs.go:282] 0 containers: []
	W1209 05:52:51.378490 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:52:51.378501 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:52:51.378512 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:52:51.439032 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:52:51.439075 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:52:51.457325 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:52:51.457355 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:52:51.525486 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:52:51.517693    3998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:51.518243    3998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:51.519766    3998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:51.520306    3998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:51.521762    3998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
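	The repeated "connection refused" on [::1]:8443 means nothing is listening on the apiserver's secure port inside the node: no kube-apiserver container was ever created, so every kubectl invocation against the in-node kubeconfig (which targets localhost:8443) fails at the TCP dial. A minimal manual probe, assuming the docker driver and this run's profile name functional-667319, and assuming ss and curl are present in the node image:
	
	    minikube -p functional-667319 ssh -- "sudo ss -ltnp | grep 8443 || echo 'no listener on 8443'"
	    minikube -p functional-667319 ssh -- "curl -sk https://localhost:8443/healthz"
	
	This is a sketch of how a reader might confirm the symptom, not part of the test run itself.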
	I1209 05:52:51.525549 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:52:51.525570 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:52:51.551425 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:52:51.551463 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:52:54.078624 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:52:54.089324 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:52:54.089395 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:52:54.117819 1437114 cri.go:89] found id: ""
	I1209 05:52:54.117840 1437114 logs.go:282] 0 containers: []
	W1209 05:52:54.117856 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:52:54.117863 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:52:54.117923 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:52:54.143006 1437114 cri.go:89] found id: ""
	I1209 05:52:54.143083 1437114 logs.go:282] 0 containers: []
	W1209 05:52:54.143105 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:52:54.143125 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:52:54.143200 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:52:54.168655 1437114 cri.go:89] found id: ""
	I1209 05:52:54.168715 1437114 logs.go:282] 0 containers: []
	W1209 05:52:54.168742 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:52:54.168758 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:52:54.168847 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:52:54.193433 1437114 cri.go:89] found id: ""
	I1209 05:52:54.193459 1437114 logs.go:282] 0 containers: []
	W1209 05:52:54.193467 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:52:54.193474 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:52:54.193558 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:52:54.216587 1437114 cri.go:89] found id: ""
	I1209 05:52:54.216663 1437114 logs.go:282] 0 containers: []
	W1209 05:52:54.216686 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:52:54.216700 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:52:54.216775 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:52:54.240686 1437114 cri.go:89] found id: ""
	I1209 05:52:54.240723 1437114 logs.go:282] 0 containers: []
	W1209 05:52:54.240732 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:52:54.240739 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:52:54.240830 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:52:54.264680 1437114 cri.go:89] found id: ""
	I1209 05:52:54.264710 1437114 logs.go:282] 0 containers: []
	W1209 05:52:54.264719 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:52:54.264725 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:52:54.264785 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:52:54.288715 1437114 cri.go:89] found id: ""
	I1209 05:52:54.288739 1437114 logs.go:282] 0 containers: []
	W1209 05:52:54.288748 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:52:54.288757 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:52:54.288769 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:52:54.344591 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:52:54.344629 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:52:54.360275 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:52:54.360350 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:52:54.422057 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:52:54.413842    4106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:54.414541    4106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:54.416178    4106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:54.416655    4106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:54.418204    4106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:52:54.422081 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:52:54.422093 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:52:54.451978 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:52:54.452157 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:52:56.987228 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:52:56.997370 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:52:56.997440 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:52:57.026856 1437114 cri.go:89] found id: ""
	I1209 05:52:57.026878 1437114 logs.go:282] 0 containers: []
	W1209 05:52:57.026886 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:52:57.026893 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:52:57.026955 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:52:57.052417 1437114 cri.go:89] found id: ""
	I1209 05:52:57.052442 1437114 logs.go:282] 0 containers: []
	W1209 05:52:57.052450 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:52:57.052457 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:52:57.052517 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:52:57.079492 1437114 cri.go:89] found id: ""
	I1209 05:52:57.079516 1437114 logs.go:282] 0 containers: []
	W1209 05:52:57.079526 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:52:57.079532 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:52:57.079590 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:52:57.103111 1437114 cri.go:89] found id: ""
	I1209 05:52:57.103135 1437114 logs.go:282] 0 containers: []
	W1209 05:52:57.103144 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:52:57.103150 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:52:57.103212 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:52:57.129591 1437114 cri.go:89] found id: ""
	I1209 05:52:57.129616 1437114 logs.go:282] 0 containers: []
	W1209 05:52:57.129624 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:52:57.129631 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:52:57.129706 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:52:57.153092 1437114 cri.go:89] found id: ""
	I1209 05:52:57.153115 1437114 logs.go:282] 0 containers: []
	W1209 05:52:57.153124 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:52:57.153131 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:52:57.153189 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:52:57.177623 1437114 cri.go:89] found id: ""
	I1209 05:52:57.177647 1437114 logs.go:282] 0 containers: []
	W1209 05:52:57.177656 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:52:57.177662 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:52:57.177748 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:52:57.202469 1437114 cri.go:89] found id: ""
	I1209 05:52:57.202493 1437114 logs.go:282] 0 containers: []
	W1209 05:52:57.202502 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:52:57.202511 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:52:57.202550 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:52:57.260356 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:52:57.260393 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:52:57.276459 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:52:57.276539 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:52:57.343015 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:52:57.335090    4217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:57.335845    4217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:57.337423    4217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:57.337717    4217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:57.339202    4217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:52:57.343037 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:52:57.343052 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:52:57.368448 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:52:57.368485 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
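	Each cycle above (repeated roughly every three seconds) is the same health check: minikube polls for a kube-apiserver process, lists CRI containers for every expected control-plane component (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, kubernetes-dashboard), finds none, and then gathers kubelet, dmesg, describe-nodes, containerd, and container-status logs. The same checks can be reproduced by hand from inside the node; the commands below are taken verbatim from the log, and only the minikube ssh entry point is an assumption about how a reader would reach the node:
	
	    minikube -p functional-667319 ssh
	    sudo pgrep -xnf kube-apiserver.*minikube.*
	    sudo crictl ps -a --quiet --name=kube-apiserver
	    sudo journalctl -u containerd -n 400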
	I1209 05:52:59.899132 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:52:59.909390 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:52:59.909502 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:52:59.942228 1437114 cri.go:89] found id: ""
	I1209 05:52:59.942299 1437114 logs.go:282] 0 containers: []
	W1209 05:52:59.942333 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:52:59.942354 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:52:59.942464 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:52:59.967993 1437114 cri.go:89] found id: ""
	I1209 05:52:59.968090 1437114 logs.go:282] 0 containers: []
	W1209 05:52:59.968105 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:52:59.968112 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:52:59.968183 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:53:00.004409 1437114 cri.go:89] found id: ""
	I1209 05:53:00.004444 1437114 logs.go:282] 0 containers: []
	W1209 05:53:00.004453 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:53:00.004461 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:53:00.004542 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:53:00.122181 1437114 cri.go:89] found id: ""
	I1209 05:53:00.122206 1437114 logs.go:282] 0 containers: []
	W1209 05:53:00.122216 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:53:00.122238 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:53:00.122319 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:53:00.178386 1437114 cri.go:89] found id: ""
	I1209 05:53:00.178469 1437114 logs.go:282] 0 containers: []
	W1209 05:53:00.178481 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:53:00.178488 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:53:00.178720 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:53:00.226314 1437114 cri.go:89] found id: ""
	I1209 05:53:00.226451 1437114 logs.go:282] 0 containers: []
	W1209 05:53:00.226477 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:53:00.226486 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:53:00.226568 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:53:00.271734 1437114 cri.go:89] found id: ""
	I1209 05:53:00.271771 1437114 logs.go:282] 0 containers: []
	W1209 05:53:00.271782 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:53:00.271790 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:53:00.271932 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:53:00.335362 1437114 cri.go:89] found id: ""
	I1209 05:53:00.335448 1437114 logs.go:282] 0 containers: []
	W1209 05:53:00.335466 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:53:00.335477 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:53:00.335493 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:53:00.365642 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:53:00.365684 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:53:00.400318 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:53:00.400349 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:53:00.462709 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:53:00.462752 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:53:00.480156 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:53:00.480188 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:53:00.548948 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:53:00.540982    4347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:00.541655    4347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:00.543286    4347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:00.543662    4347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:00.545115    4347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:53:03.050610 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:53:03.061297 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:53:03.061406 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:53:03.090201 1437114 cri.go:89] found id: ""
	I1209 05:53:03.090232 1437114 logs.go:282] 0 containers: []
	W1209 05:53:03.090240 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:53:03.090248 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:53:03.090313 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:53:03.115399 1437114 cri.go:89] found id: ""
	I1209 05:53:03.115424 1437114 logs.go:282] 0 containers: []
	W1209 05:53:03.115432 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:53:03.115438 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:53:03.115497 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:53:03.138652 1437114 cri.go:89] found id: ""
	I1209 05:53:03.138685 1437114 logs.go:282] 0 containers: []
	W1209 05:53:03.138694 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:53:03.138700 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:53:03.138771 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:53:03.163354 1437114 cri.go:89] found id: ""
	I1209 05:53:03.163387 1437114 logs.go:282] 0 containers: []
	W1209 05:53:03.163396 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:53:03.163402 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:53:03.163467 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:53:03.189982 1437114 cri.go:89] found id: ""
	I1209 05:53:03.190008 1437114 logs.go:282] 0 containers: []
	W1209 05:53:03.190016 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:53:03.190023 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:53:03.190100 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:53:03.214072 1437114 cri.go:89] found id: ""
	I1209 05:53:03.214100 1437114 logs.go:282] 0 containers: []
	W1209 05:53:03.214109 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:53:03.214115 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:53:03.214193 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:53:03.238571 1437114 cri.go:89] found id: ""
	I1209 05:53:03.238605 1437114 logs.go:282] 0 containers: []
	W1209 05:53:03.238614 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:53:03.238620 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:53:03.238713 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:53:03.262760 1437114 cri.go:89] found id: ""
	I1209 05:53:03.262791 1437114 logs.go:282] 0 containers: []
	W1209 05:53:03.262800 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:53:03.262825 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:53:03.262848 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:53:03.278402 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:53:03.278430 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:53:03.340382 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:53:03.332086    4439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:03.332485    4439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:03.334108    4439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:03.334685    4439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:03.336430    4439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:53:03.340405 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:53:03.340420 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:53:03.367157 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:53:03.367193 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:53:03.394767 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:53:03.394794 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
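	Since no control-plane container is ever created, the root cause will surface in the kubelet and containerd journals rather than in any container log. A sketch for narrowing the 400-line dumps gathered above (the grep pattern is an assumption, not part of the test):
	
	    sudo journalctl -u kubelet -n 400 --no-pager | grep -iE 'kube-apiserver|sandbox|fail'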
	I1209 05:53:05.953212 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:53:05.965657 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:53:05.965739 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:53:06.020272 1437114 cri.go:89] found id: ""
	I1209 05:53:06.020296 1437114 logs.go:282] 0 containers: []
	W1209 05:53:06.020305 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:53:06.020311 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:53:06.020379 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:53:06.045735 1437114 cri.go:89] found id: ""
	I1209 05:53:06.045757 1437114 logs.go:282] 0 containers: []
	W1209 05:53:06.045766 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:53:06.045772 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:53:06.045832 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:53:06.072090 1437114 cri.go:89] found id: ""
	I1209 05:53:06.072119 1437114 logs.go:282] 0 containers: []
	W1209 05:53:06.072129 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:53:06.072136 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:53:06.072225 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:53:06.097096 1437114 cri.go:89] found id: ""
	I1209 05:53:06.097121 1437114 logs.go:282] 0 containers: []
	W1209 05:53:06.097130 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:53:06.097137 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:53:06.097214 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:53:06.121406 1437114 cri.go:89] found id: ""
	I1209 05:53:06.121431 1437114 logs.go:282] 0 containers: []
	W1209 05:53:06.121439 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:53:06.121446 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:53:06.121503 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:53:06.146550 1437114 cri.go:89] found id: ""
	I1209 05:53:06.146585 1437114 logs.go:282] 0 containers: []
	W1209 05:53:06.146594 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:53:06.146601 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:53:06.146667 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:53:06.173744 1437114 cri.go:89] found id: ""
	I1209 05:53:06.173779 1437114 logs.go:282] 0 containers: []
	W1209 05:53:06.173788 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:53:06.173794 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:53:06.173852 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:53:06.196867 1437114 cri.go:89] found id: ""
	I1209 05:53:06.196892 1437114 logs.go:282] 0 containers: []
	W1209 05:53:06.196901 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:53:06.196911 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:53:06.196922 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:53:06.252507 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:53:06.252544 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:53:06.268558 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:53:06.268588 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:53:06.335400 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:53:06.327269    4553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:06.327995    4553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:06.329562    4553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:06.330075    4553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:06.331590    4553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:53:06.335432 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:53:06.335445 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:53:06.361277 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:53:06.361311 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:53:08.892899 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:53:08.903128 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:53:08.903197 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:53:08.927271 1437114 cri.go:89] found id: ""
	I1209 05:53:08.927347 1437114 logs.go:282] 0 containers: []
	W1209 05:53:08.927363 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:53:08.927371 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:53:08.927437 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:53:08.958272 1437114 cri.go:89] found id: ""
	I1209 05:53:08.958296 1437114 logs.go:282] 0 containers: []
	W1209 05:53:08.958305 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:53:08.958312 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:53:08.958389 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:53:08.992109 1437114 cri.go:89] found id: ""
	I1209 05:53:08.992174 1437114 logs.go:282] 0 containers: []
	W1209 05:53:08.992196 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:53:08.992217 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:53:08.992284 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:53:09.021977 1437114 cri.go:89] found id: ""
	I1209 05:53:09.022053 1437114 logs.go:282] 0 containers: []
	W1209 05:53:09.022069 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:53:09.022076 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:53:09.022135 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:53:09.045707 1437114 cri.go:89] found id: ""
	I1209 05:53:09.045731 1437114 logs.go:282] 0 containers: []
	W1209 05:53:09.045739 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:53:09.045745 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:53:09.045801 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:53:09.070070 1437114 cri.go:89] found id: ""
	I1209 05:53:09.070103 1437114 logs.go:282] 0 containers: []
	W1209 05:53:09.070112 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:53:09.070118 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:53:09.070186 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:53:09.094488 1437114 cri.go:89] found id: ""
	I1209 05:53:09.094513 1437114 logs.go:282] 0 containers: []
	W1209 05:53:09.094530 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:53:09.094537 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:53:09.094606 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:53:09.118093 1437114 cri.go:89] found id: ""
	I1209 05:53:09.118132 1437114 logs.go:282] 0 containers: []
	W1209 05:53:09.118141 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:53:09.118150 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:53:09.118161 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:53:09.179308 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:53:09.171279    4659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:09.171791    4659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:09.173320    4659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:09.173784    4659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:09.175502    4659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:53:09.179376 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:53:09.179404 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:53:09.204829 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:53:09.204867 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:53:09.232053 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:53:09.232131 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:53:09.292412 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:53:09.292453 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:53:11.810473 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:53:11.820642 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:53:11.820731 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:53:11.844911 1437114 cri.go:89] found id: ""
	I1209 05:53:11.844935 1437114 logs.go:282] 0 containers: []
	W1209 05:53:11.844944 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:53:11.844951 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:53:11.845057 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:53:11.868554 1437114 cri.go:89] found id: ""
	I1209 05:53:11.868628 1437114 logs.go:282] 0 containers: []
	W1209 05:53:11.868642 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:53:11.868649 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:53:11.868713 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:53:11.893204 1437114 cri.go:89] found id: ""
	I1209 05:53:11.893229 1437114 logs.go:282] 0 containers: []
	W1209 05:53:11.893237 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:53:11.893243 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:53:11.893307 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:53:11.922205 1437114 cri.go:89] found id: ""
	I1209 05:53:11.922235 1437114 logs.go:282] 0 containers: []
	W1209 05:53:11.922244 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:53:11.922250 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:53:11.922314 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:53:11.969099 1437114 cri.go:89] found id: ""
	I1209 05:53:11.969172 1437114 logs.go:282] 0 containers: []
	W1209 05:53:11.969195 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:53:11.969222 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:53:11.969335 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:53:11.999668 1437114 cri.go:89] found id: ""
	I1209 05:53:11.999694 1437114 logs.go:282] 0 containers: []
	W1209 05:53:11.999702 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:53:11.999709 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:53:11.999798 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:53:12.027989 1437114 cri.go:89] found id: ""
	I1209 05:53:12.028053 1437114 logs.go:282] 0 containers: []
	W1209 05:53:12.028062 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:53:12.028083 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:53:12.028182 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:53:12.060174 1437114 cri.go:89] found id: ""
	I1209 05:53:12.060202 1437114 logs.go:282] 0 containers: []
	W1209 05:53:12.060211 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:53:12.060220 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:53:12.060260 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:53:12.121282 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:53:12.121323 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:53:12.137566 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:53:12.137595 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:53:12.205667 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:53:12.197778    4774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:12.198341    4774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:12.199791    4774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:12.200371    4774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:12.201936    4774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:53:12.205687 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:53:12.205700 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:53:12.230499 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:53:12.230532 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
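
The cycle above repeats every 2-3 seconds for the remainder of this trace: the waiter probes for a running kube-apiserver process, enumerates CRI containers for each control-plane component, finds none, and re-gathers kubelet, dmesg, describe-nodes, containerd, and container-status output before the next attempt. A minimal sketch of that poll-until-ready shape follows; it is illustrative only (it assumes it runs on the node itself, whereas minikube drives these commands over SSH via ssh_runner), with the pgrep pattern copied verbatim from the log lines above.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// apiserverRunning mirrors the check in the trace:
// `sudo pgrep -xnf kube-apiserver.*minikube.*` exits 0 only when a
// matching process exists.
func apiserverRunning() bool {
	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
}

func main() {
	deadline := time.Now().Add(6 * time.Minute) // illustrative timeout, not minikube's
	for time.Now().Before(deadline) {
		if apiserverRunning() {
			fmt.Println("kube-apiserver is up")
			return
		}
		// In the trace, each miss triggers a fresh round of log
		// gathering before the next probe ~2-3s later.
		time.Sleep(3 * time.Second)
	}
	fmt.Println("timed out waiting for kube-apiserver")
}
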
	I1209 05:53:14.761775 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:53:14.772764 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:53:14.772836 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:53:14.796366 1437114 cri.go:89] found id: ""
	I1209 05:53:14.796391 1437114 logs.go:282] 0 containers: []
	W1209 05:53:14.796399 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:53:14.796406 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:53:14.796479 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:53:14.821766 1437114 cri.go:89] found id: ""
	I1209 05:53:14.821793 1437114 logs.go:282] 0 containers: []
	W1209 05:53:14.821802 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:53:14.821808 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:53:14.821868 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:53:14.846798 1437114 cri.go:89] found id: ""
	I1209 05:53:14.846823 1437114 logs.go:282] 0 containers: []
	W1209 05:53:14.846832 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:53:14.846838 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:53:14.846896 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:53:14.870638 1437114 cri.go:89] found id: ""
	I1209 05:53:14.870668 1437114 logs.go:282] 0 containers: []
	W1209 05:53:14.870677 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:53:14.870683 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:53:14.870741 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:53:14.894543 1437114 cri.go:89] found id: ""
	I1209 05:53:14.894571 1437114 logs.go:282] 0 containers: []
	W1209 05:53:14.894580 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:53:14.894586 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:53:14.894650 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:53:14.918572 1437114 cri.go:89] found id: ""
	I1209 05:53:14.918601 1437114 logs.go:282] 0 containers: []
	W1209 05:53:14.918610 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:53:14.918617 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:53:14.918699 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:53:14.947884 1437114 cri.go:89] found id: ""
	I1209 05:53:14.947914 1437114 logs.go:282] 0 containers: []
	W1209 05:53:14.947922 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:53:14.947928 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:53:14.948004 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:53:14.989982 1437114 cri.go:89] found id: ""
	I1209 05:53:14.990055 1437114 logs.go:282] 0 containers: []
	W1209 05:53:14.990078 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:53:14.990099 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:53:14.990137 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:53:15.012208 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:53:15.012307 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:53:15.086674 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:53:15.078145    4882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:15.078866    4882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:15.080581    4882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:15.081087    4882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:15.082649    4882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:53:15.078145    4882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:15.078866    4882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:15.080581    4882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:15.081087    4882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:15.082649    4882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:53:15.086740 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:53:15.086766 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:53:15.112587 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:53:15.112623 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:53:15.141472 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:53:15.141502 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
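
Each per-component check in these cycles shells out to crictl with a name filter: `sudo crictl ps -a --quiet --name=<component>` prints one container ID per line, and an empty result produces the paired `found id: ""` / `0 containers` / `No container was found` lines seen throughout. A rough equivalent of that enumeration, as a sketch rather than minikube's actual cri.go (it assumes crictl on PATH and password-less sudo, as on the node):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainers queries crictl the same way the trace does and
// returns the matching container IDs, one per output line.
func listContainers(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	// The same component names the waiter checks in each cycle.
	for _, name := range []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	} {
		ids, err := listContainers(name)
		if err != nil {
			fmt.Println(name, "query failed:", err)
			continue
		}
		fmt.Printf("%s: %d containers: %v\n", name, len(ids), ids)
	}
}
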
	I1209 05:53:17.701838 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:53:17.713895 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:53:17.713963 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:53:17.745334 1437114 cri.go:89] found id: ""
	I1209 05:53:17.745357 1437114 logs.go:282] 0 containers: []
	W1209 05:53:17.745366 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:53:17.745372 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:53:17.745470 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:53:17.770153 1437114 cri.go:89] found id: ""
	I1209 05:53:17.770220 1437114 logs.go:282] 0 containers: []
	W1209 05:53:17.770244 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:53:17.770263 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:53:17.770326 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:53:17.795244 1437114 cri.go:89] found id: ""
	I1209 05:53:17.795278 1437114 logs.go:282] 0 containers: []
	W1209 05:53:17.795287 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:53:17.795293 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:53:17.795388 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:53:17.822017 1437114 cri.go:89] found id: ""
	I1209 05:53:17.822040 1437114 logs.go:282] 0 containers: []
	W1209 05:53:17.822049 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:53:17.822055 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:53:17.822132 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:53:17.850510 1437114 cri.go:89] found id: ""
	I1209 05:53:17.850532 1437114 logs.go:282] 0 containers: []
	W1209 05:53:17.850541 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:53:17.850566 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:53:17.850624 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:53:17.875231 1437114 cri.go:89] found id: ""
	I1209 05:53:17.875314 1437114 logs.go:282] 0 containers: []
	W1209 05:53:17.875337 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:53:17.875359 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:53:17.875488 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:53:17.901146 1437114 cri.go:89] found id: ""
	I1209 05:53:17.901169 1437114 logs.go:282] 0 containers: []
	W1209 05:53:17.901178 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:53:17.901207 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:53:17.901291 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:53:17.924362 1437114 cri.go:89] found id: ""
	I1209 05:53:17.924386 1437114 logs.go:282] 0 containers: []
	W1209 05:53:17.924395 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:53:17.924404 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:53:17.924415 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:53:17.987361 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:53:17.987403 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:53:18.004290 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:53:18.004323 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:53:18.072148 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:53:18.062877    5000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:18.063667    5000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:18.065532    5000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:18.066146    5000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:18.067899    5000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:53:18.062877    5000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:18.063667    5000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:18.065532    5000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:18.066146    5000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:18.067899    5000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:53:18.072181 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:53:18.072194 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:53:18.098033 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:53:18.098071 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:53:20.625561 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:53:20.635963 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:53:20.636053 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:53:20.659961 1437114 cri.go:89] found id: ""
	I1209 05:53:20.659984 1437114 logs.go:282] 0 containers: []
	W1209 05:53:20.659994 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:53:20.660000 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:53:20.660075 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:53:20.690085 1437114 cri.go:89] found id: ""
	I1209 05:53:20.690119 1437114 logs.go:282] 0 containers: []
	W1209 05:53:20.690128 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:53:20.690134 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:53:20.690199 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:53:20.722202 1437114 cri.go:89] found id: ""
	I1209 05:53:20.722238 1437114 logs.go:282] 0 containers: []
	W1209 05:53:20.722247 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:53:20.722254 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:53:20.722319 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:53:20.754033 1437114 cri.go:89] found id: ""
	I1209 05:53:20.754057 1437114 logs.go:282] 0 containers: []
	W1209 05:53:20.754066 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:53:20.754073 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:53:20.754157 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:53:20.778306 1437114 cri.go:89] found id: ""
	I1209 05:53:20.778332 1437114 logs.go:282] 0 containers: []
	W1209 05:53:20.778341 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:53:20.778349 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:53:20.778427 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:53:20.802477 1437114 cri.go:89] found id: ""
	I1209 05:53:20.802501 1437114 logs.go:282] 0 containers: []
	W1209 05:53:20.802510 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:53:20.802516 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:53:20.802605 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:53:20.833205 1437114 cri.go:89] found id: ""
	I1209 05:53:20.833231 1437114 logs.go:282] 0 containers: []
	W1209 05:53:20.833239 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:53:20.833246 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:53:20.833310 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:53:20.858107 1437114 cri.go:89] found id: ""
	I1209 05:53:20.858172 1437114 logs.go:282] 0 containers: []
	W1209 05:53:20.858188 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:53:20.858198 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:53:20.858209 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:53:20.914050 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:53:20.914088 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:53:20.930297 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:53:20.930326 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:53:21.009735 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:53:20.998811    5112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:20.999637    5112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:21.001322    5112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:21.001871    5112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:21.003770    5112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:53:20.998811    5112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:20.999637    5112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:21.001322    5112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:21.001871    5112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:21.003770    5112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:53:21.009759 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:53:21.009772 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:53:21.035653 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:53:21.035687 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
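
Every describe-nodes attempt in this trace fails identically: kubectl cannot reach https://localhost:8443 and gets "connection refused", which indicates nothing is listening on the port at all, as opposed to a timeout, which would instead suggest packet filtering or a hung apiserver. A quick probe that draws that distinction, offered as an illustrative sketch run on the node:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Probe the apiserver port from the trace. "connection refused"
	// here matches kubectl's error and means no listener on :8443;
	// a timeout would point at filtering or a wedged process instead.
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		fmt.Println("probe failed:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on :8443")
}
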
	I1209 05:53:23.563248 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:53:23.574010 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:53:23.574087 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:53:23.603557 1437114 cri.go:89] found id: ""
	I1209 05:53:23.603583 1437114 logs.go:282] 0 containers: []
	W1209 05:53:23.603593 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:53:23.603599 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:53:23.603658 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:53:23.629927 1437114 cri.go:89] found id: ""
	I1209 05:53:23.629953 1437114 logs.go:282] 0 containers: []
	W1209 05:53:23.629961 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:53:23.629967 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:53:23.630029 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:53:23.654017 1437114 cri.go:89] found id: ""
	I1209 05:53:23.654042 1437114 logs.go:282] 0 containers: []
	W1209 05:53:23.654050 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:53:23.654057 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:53:23.654114 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:53:23.681104 1437114 cri.go:89] found id: ""
	I1209 05:53:23.681126 1437114 logs.go:282] 0 containers: []
	W1209 05:53:23.681134 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:53:23.681140 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:53:23.681210 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:53:23.717733 1437114 cri.go:89] found id: ""
	I1209 05:53:23.717754 1437114 logs.go:282] 0 containers: []
	W1209 05:53:23.717763 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:53:23.717769 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:53:23.717826 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:53:23.746697 1437114 cri.go:89] found id: ""
	I1209 05:53:23.746718 1437114 logs.go:282] 0 containers: []
	W1209 05:53:23.746727 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:53:23.746734 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:53:23.746791 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:53:23.771013 1437114 cri.go:89] found id: ""
	I1209 05:53:23.771035 1437114 logs.go:282] 0 containers: []
	W1209 05:53:23.771043 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:53:23.771049 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:53:23.771110 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:53:23.797671 1437114 cri.go:89] found id: ""
	I1209 05:53:23.797695 1437114 logs.go:282] 0 containers: []
	W1209 05:53:23.797705 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:53:23.797714 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:53:23.797727 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:53:23.863004 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:53:23.854866    5216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:23.855647    5216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:23.857241    5216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:23.857752    5216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:23.859306    5216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:53:23.854866    5216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:23.855647    5216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:23.857241    5216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:23.857752    5216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:23.859306    5216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:53:23.863025 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:53:23.863039 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:53:23.888849 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:53:23.888886 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:53:23.918103 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:53:23.918129 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:53:23.981103 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:53:23.981139 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:53:26.502565 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:53:26.513114 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:53:26.513204 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:53:26.536286 1437114 cri.go:89] found id: ""
	I1209 05:53:26.536352 1437114 logs.go:282] 0 containers: []
	W1209 05:53:26.536366 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:53:26.536373 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:53:26.536448 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:53:26.567137 1437114 cri.go:89] found id: ""
	I1209 05:53:26.567165 1437114 logs.go:282] 0 containers: []
	W1209 05:53:26.567174 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:53:26.567181 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:53:26.567255 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:53:26.593992 1437114 cri.go:89] found id: ""
	I1209 05:53:26.594018 1437114 logs.go:282] 0 containers: []
	W1209 05:53:26.594027 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:53:26.594033 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:53:26.594112 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:53:26.622318 1437114 cri.go:89] found id: ""
	I1209 05:53:26.622341 1437114 logs.go:282] 0 containers: []
	W1209 05:53:26.622349 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:53:26.622356 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:53:26.622436 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:53:26.647615 1437114 cri.go:89] found id: ""
	I1209 05:53:26.647689 1437114 logs.go:282] 0 containers: []
	W1209 05:53:26.647724 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:53:26.647744 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:53:26.647837 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:53:26.672100 1437114 cri.go:89] found id: ""
	I1209 05:53:26.672174 1437114 logs.go:282] 0 containers: []
	W1209 05:53:26.672189 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:53:26.672197 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:53:26.672268 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:53:26.702289 1437114 cri.go:89] found id: ""
	I1209 05:53:26.702322 1437114 logs.go:282] 0 containers: []
	W1209 05:53:26.702331 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:53:26.702355 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:53:26.702438 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:53:26.732737 1437114 cri.go:89] found id: ""
	I1209 05:53:26.732807 1437114 logs.go:282] 0 containers: []
	W1209 05:53:26.732831 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:53:26.732855 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:53:26.732894 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:53:26.749702 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:53:26.749778 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:53:26.813476 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:53:26.805499    5331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:26.805968    5331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:26.807499    5331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:26.807884    5331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:26.809521    5331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:53:26.805499    5331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:26.805968    5331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:26.807499    5331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:26.807884    5331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:26.809521    5331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:53:26.813510 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:53:26.813524 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:53:26.839545 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:53:26.839583 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:53:26.866441 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:53:26.866469 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:53:29.424166 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:53:29.435921 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:53:29.435993 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:53:29.462038 1437114 cri.go:89] found id: ""
	I1209 05:53:29.462060 1437114 logs.go:282] 0 containers: []
	W1209 05:53:29.462068 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:53:29.462074 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:53:29.462134 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:53:29.485671 1437114 cri.go:89] found id: ""
	I1209 05:53:29.485695 1437114 logs.go:282] 0 containers: []
	W1209 05:53:29.485704 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:53:29.485710 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:53:29.485765 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:53:29.508799 1437114 cri.go:89] found id: ""
	I1209 05:53:29.508829 1437114 logs.go:282] 0 containers: []
	W1209 05:53:29.508838 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:53:29.508844 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:53:29.508910 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:53:29.533027 1437114 cri.go:89] found id: ""
	I1209 05:53:29.533052 1437114 logs.go:282] 0 containers: []
	W1209 05:53:29.533060 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:53:29.533066 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:53:29.533151 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:53:29.565784 1437114 cri.go:89] found id: ""
	I1209 05:53:29.565811 1437114 logs.go:282] 0 containers: []
	W1209 05:53:29.565819 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:53:29.565825 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:53:29.565882 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:53:29.590917 1437114 cri.go:89] found id: ""
	I1209 05:53:29.590943 1437114 logs.go:282] 0 containers: []
	W1209 05:53:29.590951 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:53:29.590957 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:53:29.591014 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:53:29.618282 1437114 cri.go:89] found id: ""
	I1209 05:53:29.618307 1437114 logs.go:282] 0 containers: []
	W1209 05:53:29.618316 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:53:29.618322 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:53:29.618381 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:53:29.646902 1437114 cri.go:89] found id: ""
	I1209 05:53:29.646936 1437114 logs.go:282] 0 containers: []
	W1209 05:53:29.646946 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:53:29.646955 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:53:29.646973 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:53:29.707743 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:53:29.707828 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:53:29.724421 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:53:29.724499 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:53:29.794074 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:53:29.785906    5445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:29.786405    5445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:29.787873    5445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:29.788573    5445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:29.790227    5445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:53:29.785906    5445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:29.786405    5445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:29.787873    5445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:29.788573    5445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:29.790227    5445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:53:29.794139 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:53:29.794180 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:53:29.820222 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:53:29.820259 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
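
Each gather cycle pulls the same five sources: kubelet and containerd units via journalctl (last 400 lines), dmesg filtered to warn severity and above, describe-nodes via the versioned kubectl binary with the node's kubeconfig, and container status via crictl with a docker fallback. To replicate the bundle by hand, a small harness can run the exact commands logged above; this sketch assumes local execution on the node (minikube issues them over SSH) and copies the command strings verbatim from the trace.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// The five log sources gathered per cycle, copied from the trace.
	cmds := []string{
		"sudo journalctl -u kubelet -n 400",
		"sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
		"sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig",
		"sudo journalctl -u containerd -n 400",
		"sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
	}
	for _, c := range cmds {
		out, err := exec.Command("/bin/bash", "-c", c).CombinedOutput()
		fmt.Printf("$ %s\n(err=%v)\n%s\n", c, err, out)
	}
}
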
	I1209 05:53:32.350724 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:53:32.361228 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:53:32.361300 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:53:32.389541 1437114 cri.go:89] found id: ""
	I1209 05:53:32.389564 1437114 logs.go:282] 0 containers: []
	W1209 05:53:32.389572 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:53:32.389578 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:53:32.389637 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:53:32.412985 1437114 cri.go:89] found id: ""
	I1209 05:53:32.413008 1437114 logs.go:282] 0 containers: []
	W1209 05:53:32.413017 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:53:32.413023 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:53:32.413100 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:53:32.436603 1437114 cri.go:89] found id: ""
	I1209 05:53:32.436628 1437114 logs.go:282] 0 containers: []
	W1209 05:53:32.436637 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:53:32.436644 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:53:32.436703 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:53:32.461975 1437114 cri.go:89] found id: ""
	I1209 05:53:32.462039 1437114 logs.go:282] 0 containers: []
	W1209 05:53:32.462053 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:53:32.462060 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:53:32.462122 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:53:32.485536 1437114 cri.go:89] found id: ""
	I1209 05:53:32.485560 1437114 logs.go:282] 0 containers: []
	W1209 05:53:32.485568 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:53:32.485574 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:53:32.485633 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:53:32.509130 1437114 cri.go:89] found id: ""
	I1209 05:53:32.509159 1437114 logs.go:282] 0 containers: []
	W1209 05:53:32.509168 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:53:32.509175 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:53:32.509253 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:53:32.532336 1437114 cri.go:89] found id: ""
	I1209 05:53:32.532366 1437114 logs.go:282] 0 containers: []
	W1209 05:53:32.532374 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:53:32.532381 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:53:32.532465 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:53:32.556282 1437114 cri.go:89] found id: ""
	I1209 05:53:32.556319 1437114 logs.go:282] 0 containers: []
	W1209 05:53:32.556329 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:53:32.556338 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:53:32.556352 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:53:32.572109 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:53:32.572183 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:53:32.633108 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:53:32.624780    5553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:32.625448    5553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:32.627074    5553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:32.627615    5553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:32.629220    5553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:53:32.624780    5553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:32.625448    5553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:32.627074    5553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:32.627615    5553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:32.629220    5553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:53:32.633141 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:53:32.633155 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:53:32.662184 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:53:32.662225 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:53:32.702034 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:53:32.702063 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:53:35.266899 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:53:35.277229 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:53:35.277296 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:53:35.300790 1437114 cri.go:89] found id: ""
	I1209 05:53:35.300814 1437114 logs.go:282] 0 containers: []
	W1209 05:53:35.300823 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:53:35.300830 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:53:35.300892 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:53:35.325182 1437114 cri.go:89] found id: ""
	I1209 05:53:35.325204 1437114 logs.go:282] 0 containers: []
	W1209 05:53:35.325212 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:53:35.325218 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:53:35.325280 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:53:35.353701 1437114 cri.go:89] found id: ""
	I1209 05:53:35.353727 1437114 logs.go:282] 0 containers: []
	W1209 05:53:35.353735 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:53:35.353741 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:53:35.353802 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:53:35.377248 1437114 cri.go:89] found id: ""
	I1209 05:53:35.377272 1437114 logs.go:282] 0 containers: []
	W1209 05:53:35.377281 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:53:35.377288 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:53:35.377347 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:53:35.401542 1437114 cri.go:89] found id: ""
	I1209 05:53:35.401568 1437114 logs.go:282] 0 containers: []
	W1209 05:53:35.401577 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:53:35.401584 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:53:35.401663 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:53:35.426460 1437114 cri.go:89] found id: ""
	I1209 05:53:35.426488 1437114 logs.go:282] 0 containers: []
	W1209 05:53:35.426497 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:53:35.426503 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:53:35.426561 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:53:35.454120 1437114 cri.go:89] found id: ""
	I1209 05:53:35.454145 1437114 logs.go:282] 0 containers: []
	W1209 05:53:35.454154 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:53:35.454160 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:53:35.454217 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:53:35.478639 1437114 cri.go:89] found id: ""
	I1209 05:53:35.478664 1437114 logs.go:282] 0 containers: []
	W1209 05:53:35.478673 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:53:35.478681 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:53:35.478692 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:53:35.504448 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:53:35.504487 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:53:35.533724 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:53:35.533751 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:53:35.589526 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:53:35.589560 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:53:35.605319 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:53:35.605345 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:53:35.676318 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:53:35.668651    5679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:35.669162    5679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:35.670613    5679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:35.671063    5679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:35.672483    5679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:53:35.668651    5679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:35.669162    5679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:35.670613    5679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:35.671063    5679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:35.672483    5679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
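
The repeated describe-nodes failure above is a plain TCP refusal: kubectl runs inside the node against the https://localhost:8443 endpoint from /var/lib/minikube/kubeconfig, and nothing is listening there because no kube-apiserver container has been created yet. A minimal Go sketch (an illustration, not minikube's code) that reproduces the same probe without going through kubectl:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Same endpoint the in-node kubeconfig points kubectl at.
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		// With nothing listening, this typically prints:
		// dial tcp [::1]:8443: connect: connection refused
		fmt.Println("apiserver unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is open")
}
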
	I1209 05:53:38.177618 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:53:38.191936 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:53:38.192007 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:53:38.225078 1437114 cri.go:89] found id: ""
	I1209 05:53:38.225117 1437114 logs.go:282] 0 containers: []
	W1209 05:53:38.225126 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:53:38.225133 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:53:38.225204 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:53:38.257246 1437114 cri.go:89] found id: ""
	I1209 05:53:38.257272 1437114 logs.go:282] 0 containers: []
	W1209 05:53:38.257281 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:53:38.257286 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:53:38.257350 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:53:38.286060 1437114 cri.go:89] found id: ""
	I1209 05:53:38.286083 1437114 logs.go:282] 0 containers: []
	W1209 05:53:38.286091 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:53:38.286097 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:53:38.286158 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:53:38.315924 1437114 cri.go:89] found id: ""
	I1209 05:53:38.315989 1437114 logs.go:282] 0 containers: []
	W1209 05:53:38.316050 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:53:38.316081 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:53:38.316148 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:53:38.340319 1437114 cri.go:89] found id: ""
	I1209 05:53:38.340348 1437114 logs.go:282] 0 containers: []
	W1209 05:53:38.340357 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:53:38.340363 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:53:38.340424 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:53:38.365184 1437114 cri.go:89] found id: ""
	I1209 05:53:38.365220 1437114 logs.go:282] 0 containers: []
	W1209 05:53:38.365229 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:53:38.365235 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:53:38.365307 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:53:38.389641 1437114 cri.go:89] found id: ""
	I1209 05:53:38.389720 1437114 logs.go:282] 0 containers: []
	W1209 05:53:38.389744 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:53:38.389759 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:53:38.389832 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:53:38.420280 1437114 cri.go:89] found id: ""
	I1209 05:53:38.420306 1437114 logs.go:282] 0 containers: []
	W1209 05:53:38.420315 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:53:38.420324 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:53:38.420353 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:53:38.476252 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:53:38.476288 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:53:38.492393 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:53:38.492472 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:53:38.557826 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:53:38.549594    5780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:38.550283    5780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:38.551905    5780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:38.552451    5780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:38.553989    5780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:53:38.549594    5780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:38.550283    5780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:38.551905    5780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:38.552451    5780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:38.553989    5780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:53:38.557849 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:53:38.557862 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:53:38.583171 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:53:38.583206 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:53:41.110406 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:53:41.120474 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:53:41.120545 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:53:41.145006 1437114 cri.go:89] found id: ""
	I1209 05:53:41.145030 1437114 logs.go:282] 0 containers: []
	W1209 05:53:41.145038 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:53:41.145044 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:53:41.145100 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:53:41.168892 1437114 cri.go:89] found id: ""
	I1209 05:53:41.168917 1437114 logs.go:282] 0 containers: []
	W1209 05:53:41.168925 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:53:41.168932 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:53:41.168989 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:53:41.206601 1437114 cri.go:89] found id: ""
	I1209 05:53:41.206630 1437114 logs.go:282] 0 containers: []
	W1209 05:53:41.206641 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:53:41.206653 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:53:41.206721 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:53:41.247172 1437114 cri.go:89] found id: ""
	I1209 05:53:41.247204 1437114 logs.go:282] 0 containers: []
	W1209 05:53:41.247212 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:53:41.247219 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:53:41.247276 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:53:41.271589 1437114 cri.go:89] found id: ""
	I1209 05:53:41.271613 1437114 logs.go:282] 0 containers: []
	W1209 05:53:41.271621 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:53:41.271628 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:53:41.271714 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:53:41.298007 1437114 cri.go:89] found id: ""
	I1209 05:53:41.298032 1437114 logs.go:282] 0 containers: []
	W1209 05:53:41.298041 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:53:41.298047 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:53:41.298105 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:53:41.325987 1437114 cri.go:89] found id: ""
	I1209 05:53:41.326010 1437114 logs.go:282] 0 containers: []
	W1209 05:53:41.326025 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:53:41.326050 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:53:41.326131 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:53:41.351424 1437114 cri.go:89] found id: ""
	I1209 05:53:41.351449 1437114 logs.go:282] 0 containers: []
	W1209 05:53:41.351457 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:53:41.351466 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:53:41.351476 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:53:41.376872 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:53:41.376906 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:53:41.405296 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:53:41.405322 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:53:41.461131 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:53:41.461167 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:53:41.477891 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:53:41.477920 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:53:41.546568 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:53:41.537827    5907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:41.538521    5907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:41.540212    5907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:41.540814    5907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:41.542724    5907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:53:41.537827    5907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:41.538521    5907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:41.540212    5907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:41.540814    5907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:41.542724    5907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
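
Every cycle in this wait loop performs the same per-component container check before falling back to log gathering. A hedged standalone sketch of that check, shelling out to crictl the way the ssh_runner lines show (component names taken from the log; the SSH transport and minikube's error handling are omitted for brevity):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, name := range components {
		// Mirrors the logged command: sudo crictl ps -a --quiet --name=<component>
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("crictl failed for %q: %v\n", name, err)
			continue
		}
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			// Corresponds to the log's: No container was found matching "<component>"
			fmt.Printf("no container found matching %q\n", name)
			continue
		}
		fmt.Printf("%s: %v\n", name, ids)
	}
}
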
	I1209 05:53:44.046855 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:53:44.058136 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:53:44.058209 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:53:44.086287 1437114 cri.go:89] found id: ""
	I1209 05:53:44.086311 1437114 logs.go:282] 0 containers: []
	W1209 05:53:44.086320 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:53:44.086326 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:53:44.086390 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:53:44.110388 1437114 cri.go:89] found id: ""
	I1209 05:53:44.110411 1437114 logs.go:282] 0 containers: []
	W1209 05:53:44.110419 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:53:44.110425 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:53:44.110481 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:53:44.134842 1437114 cri.go:89] found id: ""
	I1209 05:53:44.134864 1437114 logs.go:282] 0 containers: []
	W1209 05:53:44.134873 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:53:44.134879 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:53:44.134936 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:53:44.161691 1437114 cri.go:89] found id: ""
	I1209 05:53:44.161716 1437114 logs.go:282] 0 containers: []
	W1209 05:53:44.161725 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:53:44.161732 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:53:44.161789 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:53:44.195302 1437114 cri.go:89] found id: ""
	I1209 05:53:44.195326 1437114 logs.go:282] 0 containers: []
	W1209 05:53:44.195335 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:53:44.195341 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:53:44.195408 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:53:44.225882 1437114 cri.go:89] found id: ""
	I1209 05:53:44.225907 1437114 logs.go:282] 0 containers: []
	W1209 05:53:44.225916 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:53:44.225922 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:53:44.225981 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:53:44.253610 1437114 cri.go:89] found id: ""
	I1209 05:53:44.253636 1437114 logs.go:282] 0 containers: []
	W1209 05:53:44.253645 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:53:44.253655 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:53:44.253734 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:53:44.281815 1437114 cri.go:89] found id: ""
	I1209 05:53:44.281840 1437114 logs.go:282] 0 containers: []
	W1209 05:53:44.281848 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:53:44.281857 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:53:44.281868 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:53:44.339663 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:53:44.339702 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:53:44.355859 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:53:44.355938 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:53:44.429444 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:53:44.421835    6007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:44.422435    6007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:44.423949    6007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:44.424455    6007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:44.425745    6007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:53:44.421835    6007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:44.422435    6007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:44.423949    6007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:44.424455    6007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:44.425745    6007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:53:44.429466 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:53:44.429483 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:53:44.455230 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:53:44.455267 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:53:46.982212 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:53:46.993498 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:53:46.993587 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:53:47.023958 1437114 cri.go:89] found id: ""
	I1209 05:53:47.023982 1437114 logs.go:282] 0 containers: []
	W1209 05:53:47.023991 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:53:47.023997 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:53:47.024069 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:53:47.048879 1437114 cri.go:89] found id: ""
	I1209 05:53:47.048901 1437114 logs.go:282] 0 containers: []
	W1209 05:53:47.048910 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:53:47.048916 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:53:47.048983 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:53:47.073853 1437114 cri.go:89] found id: ""
	I1209 05:53:47.073878 1437114 logs.go:282] 0 containers: []
	W1209 05:53:47.073886 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:53:47.073894 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:53:47.073955 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:53:47.096844 1437114 cri.go:89] found id: ""
	I1209 05:53:47.096869 1437114 logs.go:282] 0 containers: []
	W1209 05:53:47.096877 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:53:47.096884 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:53:47.096945 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:53:47.120160 1437114 cri.go:89] found id: ""
	I1209 05:53:47.120185 1437114 logs.go:282] 0 containers: []
	W1209 05:53:47.120194 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:53:47.120200 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:53:47.120261 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:53:47.145073 1437114 cri.go:89] found id: ""
	I1209 05:53:47.145139 1437114 logs.go:282] 0 containers: []
	W1209 05:53:47.145155 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:53:47.145163 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:53:47.145226 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:53:47.168839 1437114 cri.go:89] found id: ""
	I1209 05:53:47.168862 1437114 logs.go:282] 0 containers: []
	W1209 05:53:47.168870 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:53:47.168878 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:53:47.168956 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:53:47.200241 1437114 cri.go:89] found id: ""
	I1209 05:53:47.200264 1437114 logs.go:282] 0 containers: []
	W1209 05:53:47.200272 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:53:47.200282 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:53:47.200311 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:53:47.261748 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:53:47.261783 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:53:47.277688 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:53:47.277718 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:53:47.342796 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:53:47.334710    6118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:47.335374    6118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:47.336895    6118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:47.337477    6118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:47.338953    6118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:53:47.334710    6118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:47.335374    6118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:47.336895    6118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:47.337477    6118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:47.338953    6118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:53:47.342859 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:53:47.342886 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:53:47.367837 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:53:47.367872 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:53:49.896241 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:53:49.908838 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:53:49.908918 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:53:49.942190 1437114 cri.go:89] found id: ""
	I1209 05:53:49.942212 1437114 logs.go:282] 0 containers: []
	W1209 05:53:49.942221 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:53:49.942226 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:53:49.942387 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:53:49.977371 1437114 cri.go:89] found id: ""
	I1209 05:53:49.977393 1437114 logs.go:282] 0 containers: []
	W1209 05:53:49.977401 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:53:49.977408 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:53:49.977468 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:53:50.002223 1437114 cri.go:89] found id: ""
	I1209 05:53:50.002247 1437114 logs.go:282] 0 containers: []
	W1209 05:53:50.002255 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:53:50.002262 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:53:50.002326 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:53:50.032431 1437114 cri.go:89] found id: ""
	I1209 05:53:50.032458 1437114 logs.go:282] 0 containers: []
	W1209 05:53:50.032467 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:53:50.032474 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:53:50.032535 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:53:50.062289 1437114 cri.go:89] found id: ""
	I1209 05:53:50.062314 1437114 logs.go:282] 0 containers: []
	W1209 05:53:50.062323 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:53:50.062329 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:53:50.062418 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:53:50.088271 1437114 cri.go:89] found id: ""
	I1209 05:53:50.088298 1437114 logs.go:282] 0 containers: []
	W1209 05:53:50.088307 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:53:50.088313 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:53:50.088382 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:53:50.114549 1437114 cri.go:89] found id: ""
	I1209 05:53:50.114629 1437114 logs.go:282] 0 containers: []
	W1209 05:53:50.115120 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:53:50.115137 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:53:50.115209 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:53:50.141196 1437114 cri.go:89] found id: ""
	I1209 05:53:50.141276 1437114 logs.go:282] 0 containers: []
	W1209 05:53:50.141298 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:53:50.141318 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:53:50.141353 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:53:50.198211 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:53:50.198284 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:53:50.215943 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:53:50.216047 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:53:50.281793 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:53:50.272885    6231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:50.273579    6231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:50.275295    6231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:50.275902    6231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:50.277606    6231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:53:50.272885    6231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:50.273579    6231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:50.275295    6231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:50.275902    6231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:50.277606    6231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:53:50.281814 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:53:50.281826 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:53:50.308006 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:53:50.308052 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
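
The pgrep timestamps (05:53:38, :41, :44, :47, :49, :52, ...) show the loop re-checking for a kube-apiserver process roughly every three seconds. A simplified poll-until-deadline sketch of that pattern (the function names are illustrative, not minikube's actual implementation):

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"time"
)

// apiserverProcessRunning mirrors the logged check:
// sudo pgrep -xnf kube-apiserver.*minikube.*
func apiserverProcessRunning() bool {
	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
}

func waitForAPIServer(interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if apiserverProcessRunning() {
			return nil
		}
		time.Sleep(interval)
	}
	return errors.New("timed out waiting for kube-apiserver process")
}

func main() {
	if err := waitForAPIServer(3*time.Second, 2*time.Minute); err != nil {
		fmt.Println(err) // the state this log is stuck in
	}
}
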
	I1209 05:53:52.837556 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:53:52.848136 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:53:52.848208 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:53:52.872274 1437114 cri.go:89] found id: ""
	I1209 05:53:52.872302 1437114 logs.go:282] 0 containers: []
	W1209 05:53:52.872310 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:53:52.872317 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:53:52.872375 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:53:52.899101 1437114 cri.go:89] found id: ""
	I1209 05:53:52.899125 1437114 logs.go:282] 0 containers: []
	W1209 05:53:52.899134 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:53:52.899140 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:53:52.899199 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:53:52.926800 1437114 cri.go:89] found id: ""
	I1209 05:53:52.926825 1437114 logs.go:282] 0 containers: []
	W1209 05:53:52.926834 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:53:52.926840 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:53:52.926900 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:53:52.962012 1437114 cri.go:89] found id: ""
	I1209 05:53:52.962037 1437114 logs.go:282] 0 containers: []
	W1209 05:53:52.962055 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:53:52.962063 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:53:52.962140 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:53:52.996310 1437114 cri.go:89] found id: ""
	I1209 05:53:52.996336 1437114 logs.go:282] 0 containers: []
	W1209 05:53:52.996345 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:53:52.996351 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:53:52.996410 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:53:53.031535 1437114 cri.go:89] found id: ""
	I1209 05:53:53.031563 1437114 logs.go:282] 0 containers: []
	W1209 05:53:53.031572 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:53:53.031578 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:53:53.031637 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:53:53.059974 1437114 cri.go:89] found id: ""
	I1209 05:53:53.060004 1437114 logs.go:282] 0 containers: []
	W1209 05:53:53.060030 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:53:53.060038 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:53:53.060096 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:53:53.085290 1437114 cri.go:89] found id: ""
	I1209 05:53:53.085356 1437114 logs.go:282] 0 containers: []
	W1209 05:53:53.085386 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:53:53.085403 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:53:53.085415 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:53:53.142442 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:53:53.142477 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:53:53.159141 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:53:53.159169 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:53:53.237761 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:53:53.229474    6343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:53.230237    6343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:53.231874    6343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:53.232214    6343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:53.233652    6343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:53:53.229474    6343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:53.230237    6343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:53.231874    6343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:53.232214    6343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:53.233652    6343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:53:53.237779 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:53:53.237791 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:53:53.265602 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:53:53.265679 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:53:55.800068 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:53:55.810556 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:53:55.810627 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:53:55.836257 1437114 cri.go:89] found id: ""
	I1209 05:53:55.836280 1437114 logs.go:282] 0 containers: []
	W1209 05:53:55.836289 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:53:55.836295 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:53:55.836352 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:53:55.861759 1437114 cri.go:89] found id: ""
	I1209 05:53:55.861783 1437114 logs.go:282] 0 containers: []
	W1209 05:53:55.861792 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:53:55.861798 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:53:55.861865 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:53:55.886950 1437114 cri.go:89] found id: ""
	I1209 05:53:55.886982 1437114 logs.go:282] 0 containers: []
	W1209 05:53:55.886991 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:53:55.886997 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:53:55.887072 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:53:55.912055 1437114 cri.go:89] found id: ""
	I1209 05:53:55.912081 1437114 logs.go:282] 0 containers: []
	W1209 05:53:55.912089 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:53:55.912096 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:53:55.912162 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:53:55.949365 1437114 cri.go:89] found id: ""
	I1209 05:53:55.949431 1437114 logs.go:282] 0 containers: []
	W1209 05:53:55.949455 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:53:55.949471 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:53:55.949545 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:53:55.977916 1437114 cri.go:89] found id: ""
	I1209 05:53:55.977938 1437114 logs.go:282] 0 containers: []
	W1209 05:53:55.977946 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:53:55.977953 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:53:55.978040 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:53:56.013033 1437114 cri.go:89] found id: ""
	I1209 05:53:56.013070 1437114 logs.go:282] 0 containers: []
	W1209 05:53:56.013079 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:53:56.013086 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:53:56.013177 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:53:56.039563 1437114 cri.go:89] found id: ""
	I1209 05:53:56.039610 1437114 logs.go:282] 0 containers: []
	W1209 05:53:56.039620 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:53:56.039629 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:53:56.039641 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:53:56.065976 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:53:56.066014 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:53:56.097703 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:53:56.097732 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:53:56.156555 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:53:56.156594 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:53:56.172549 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:53:56.172576 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:53:56.257220 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:53:56.248866    6469 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:56.249574    6469 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:56.251225    6469 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:56.251719    6469 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:56.253344    6469 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:53:56.248866    6469 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:56.249574    6469 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:56.251225    6469 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:56.251719    6469 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:56.253344    6469 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
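
One detail worth noting: the order of the "Gathering logs for ..." steps changes between cycles (containerd first at 05:53:35, 05:53:41, and 05:53:56; kubelet first in the passes in between). That run-to-run shuffle is consistent with ranging over a Go map, whose iteration order is deliberately randomized, as this self-contained example demonstrates (the map contents here are illustrative):

package main

import "fmt"

func main() {
	sources := map[string]string{
		"kubelet":          "journalctl -u kubelet -n 400",
		"dmesg":            "dmesg --level warn,err,crit,alert,emerg | tail -n 400",
		"describe nodes":   "kubectl describe nodes",
		"containerd":       "journalctl -u containerd -n 400",
		"container status": "crictl ps -a",
	}
	for name, cmd := range sources { // order differs from run to run
		fmt.Printf("Gathering logs for %s ... (%s)\n", name, cmd)
	}
}
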
	I1209 05:53:58.758071 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:53:58.768718 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:53:58.768796 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:53:58.793984 1437114 cri.go:89] found id: ""
	I1209 05:53:58.794007 1437114 logs.go:282] 0 containers: []
	W1209 05:53:58.794015 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:53:58.794021 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:53:58.794078 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:53:58.818550 1437114 cri.go:89] found id: ""
	I1209 05:53:58.818574 1437114 logs.go:282] 0 containers: []
	W1209 05:53:58.818582 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:53:58.818589 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:53:58.818648 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:53:58.843617 1437114 cri.go:89] found id: ""
	I1209 05:53:58.843696 1437114 logs.go:282] 0 containers: []
	W1209 05:53:58.843719 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:53:58.843738 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:53:58.843809 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:53:58.868732 1437114 cri.go:89] found id: ""
	I1209 05:53:58.868754 1437114 logs.go:282] 0 containers: []
	W1209 05:53:58.868763 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:53:58.868769 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:53:58.868823 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:53:58.892930 1437114 cri.go:89] found id: ""
	I1209 05:53:58.892953 1437114 logs.go:282] 0 containers: []
	W1209 05:53:58.892961 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:53:58.892968 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:53:58.893027 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:53:58.917833 1437114 cri.go:89] found id: ""
	I1209 05:53:58.917857 1437114 logs.go:282] 0 containers: []
	W1209 05:53:58.917865 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:53:58.917872 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:53:58.917933 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:53:58.965955 1437114 cri.go:89] found id: ""
	I1209 05:53:58.965982 1437114 logs.go:282] 0 containers: []
	W1209 05:53:58.965990 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:53:58.965996 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:53:58.966054 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:53:58.999708 1437114 cri.go:89] found id: ""
	I1209 05:53:58.999736 1437114 logs.go:282] 0 containers: []
	W1209 05:53:58.999744 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:53:58.999754 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:53:58.999764 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:53:59.065757 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:53:59.057189    6561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:59.058037    6561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:59.059660    6561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:59.060056    6561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:59.061679    6561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:53:59.057189    6561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:59.058037    6561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:59.059660    6561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:59.060056    6561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:59.061679    6561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:53:59.065776 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:53:59.065788 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:53:59.090908 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:53:59.090944 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:53:59.118148 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:53:59.118180 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:53:59.175439 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:53:59.175476 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:54:01.697656 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:54:01.712348 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:54:01.712424 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:54:01.743582 1437114 cri.go:89] found id: ""
	I1209 05:54:01.743609 1437114 logs.go:282] 0 containers: []
	W1209 05:54:01.743618 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:54:01.743625 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:54:01.743688 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:54:01.769801 1437114 cri.go:89] found id: ""
	I1209 05:54:01.769825 1437114 logs.go:282] 0 containers: []
	W1209 05:54:01.769834 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:54:01.769840 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:54:01.769896 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:54:01.798274 1437114 cri.go:89] found id: ""
	I1209 05:54:01.798299 1437114 logs.go:282] 0 containers: []
	W1209 05:54:01.798308 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:54:01.798314 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:54:01.798375 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:54:01.827182 1437114 cri.go:89] found id: ""
	I1209 05:54:01.827207 1437114 logs.go:282] 0 containers: []
	W1209 05:54:01.827215 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:54:01.827222 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:54:01.827284 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:54:01.856540 1437114 cri.go:89] found id: ""
	I1209 05:54:01.856564 1437114 logs.go:282] 0 containers: []
	W1209 05:54:01.856573 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:54:01.856579 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:54:01.856659 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:54:01.885694 1437114 cri.go:89] found id: ""
	I1209 05:54:01.885719 1437114 logs.go:282] 0 containers: []
	W1209 05:54:01.885728 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:54:01.885734 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:54:01.885808 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:54:01.915290 1437114 cri.go:89] found id: ""
	I1209 05:54:01.915318 1437114 logs.go:282] 0 containers: []
	W1209 05:54:01.915327 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:54:01.915333 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:54:01.915392 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:54:01.950840 1437114 cri.go:89] found id: ""
	I1209 05:54:01.950869 1437114 logs.go:282] 0 containers: []
	W1209 05:54:01.950878 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:54:01.950888 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:54:01.950899 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:54:02.014414 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:54:02.014453 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:54:02.032051 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:54:02.032135 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:54:02.095629 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:54:02.087393    6683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:02.088084    6683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:02.089580    6683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:02.090087    6683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:02.091647    6683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:54:02.087393    6683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:02.088084    6683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:02.089580    6683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:02.090087    6683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:02.091647    6683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:54:02.095650 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:54:02.095663 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:54:02.122511 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:54:02.122550 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:54:04.650297 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:54:04.660872 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:54:04.660943 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:54:04.687789 1437114 cri.go:89] found id: ""
	I1209 05:54:04.687819 1437114 logs.go:282] 0 containers: []
	W1209 05:54:04.687827 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:54:04.687833 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:54:04.687902 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:54:04.711324 1437114 cri.go:89] found id: ""
	I1209 05:54:04.711349 1437114 logs.go:282] 0 containers: []
	W1209 05:54:04.711357 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:54:04.711364 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:54:04.711423 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:54:04.737863 1437114 cri.go:89] found id: ""
	I1209 05:54:04.737888 1437114 logs.go:282] 0 containers: []
	W1209 05:54:04.737896 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:54:04.737902 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:54:04.737978 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:54:04.762117 1437114 cri.go:89] found id: ""
	I1209 05:54:04.762143 1437114 logs.go:282] 0 containers: []
	W1209 05:54:04.762153 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:54:04.762160 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:54:04.762242 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:54:04.786158 1437114 cri.go:89] found id: ""
	I1209 05:54:04.786181 1437114 logs.go:282] 0 containers: []
	W1209 05:54:04.786189 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:54:04.786195 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:54:04.786252 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:54:04.810657 1437114 cri.go:89] found id: ""
	I1209 05:54:04.810727 1437114 logs.go:282] 0 containers: []
	W1209 05:54:04.810758 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:54:04.810777 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:54:04.810865 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:54:04.835039 1437114 cri.go:89] found id: ""
	I1209 05:54:04.835061 1437114 logs.go:282] 0 containers: []
	W1209 05:54:04.835069 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:54:04.835075 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:54:04.835132 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:54:04.863664 1437114 cri.go:89] found id: ""
	I1209 05:54:04.863691 1437114 logs.go:282] 0 containers: []
	W1209 05:54:04.863704 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:54:04.863713 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:54:04.863724 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:54:04.889846 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:54:04.889882 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:54:04.919060 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:54:04.919086 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:54:04.995975 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:54:04.996070 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:54:05.020220 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:54:05.020254 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:54:05.088696 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:54:05.080290    6811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:05.080797    6811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:05.082535    6811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:05.082897    6811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:05.084443    6811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:54:05.080290    6811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:05.080797    6811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:05.082535    6811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:05.082897    6811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:05.084443    6811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:54:07.590606 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:54:07.601036 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:54:07.601107 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:54:07.626527 1437114 cri.go:89] found id: ""
	I1209 05:54:07.626550 1437114 logs.go:282] 0 containers: []
	W1209 05:54:07.626559 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:54:07.626566 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:54:07.626624 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:54:07.656166 1437114 cri.go:89] found id: ""
	I1209 05:54:07.656193 1437114 logs.go:282] 0 containers: []
	W1209 05:54:07.656201 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:54:07.656207 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:54:07.656272 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:54:07.682014 1437114 cri.go:89] found id: ""
	I1209 05:54:07.682038 1437114 logs.go:282] 0 containers: []
	W1209 05:54:07.682046 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:54:07.682052 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:54:07.682116 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:54:07.707210 1437114 cri.go:89] found id: ""
	I1209 05:54:07.707234 1437114 logs.go:282] 0 containers: []
	W1209 05:54:07.707242 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:54:07.707248 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:54:07.707332 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:54:07.731843 1437114 cri.go:89] found id: ""
	I1209 05:54:07.731868 1437114 logs.go:282] 0 containers: []
	W1209 05:54:07.731877 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:54:07.731892 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:54:07.731958 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:54:07.760321 1437114 cri.go:89] found id: ""
	I1209 05:54:07.760346 1437114 logs.go:282] 0 containers: []
	W1209 05:54:07.760354 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:54:07.760363 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:54:07.760424 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:54:07.786309 1437114 cri.go:89] found id: ""
	I1209 05:54:07.786330 1437114 logs.go:282] 0 containers: []
	W1209 05:54:07.786338 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:54:07.786350 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:54:07.786406 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:54:07.809182 1437114 cri.go:89] found id: ""
	I1209 05:54:07.809216 1437114 logs.go:282] 0 containers: []
	W1209 05:54:07.809225 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:54:07.809233 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:54:07.809244 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:54:07.839994 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:54:07.840050 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:54:07.898120 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:54:07.898152 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:54:07.914130 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:54:07.914234 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:54:08.009314 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:54:07.997479    6916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:07.998081    6916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:07.999634    6916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:08.000228    6916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:08.002087    6916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:54:07.997479    6916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:07.998081    6916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:07.999634    6916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:08.000228    6916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:08.002087    6916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:54:08.009391 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:54:08.009413 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:54:10.536185 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:54:10.547685 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:54:10.547757 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:54:10.571843 1437114 cri.go:89] found id: ""
	I1209 05:54:10.571865 1437114 logs.go:282] 0 containers: []
	W1209 05:54:10.571873 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:54:10.571879 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:54:10.571935 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:54:10.598065 1437114 cri.go:89] found id: ""
	I1209 05:54:10.598092 1437114 logs.go:282] 0 containers: []
	W1209 05:54:10.598101 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:54:10.598107 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:54:10.598165 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:54:10.623072 1437114 cri.go:89] found id: ""
	I1209 05:54:10.623098 1437114 logs.go:282] 0 containers: []
	W1209 05:54:10.623107 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:54:10.623113 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:54:10.623200 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:54:10.649781 1437114 cri.go:89] found id: ""
	I1209 05:54:10.649806 1437114 logs.go:282] 0 containers: []
	W1209 05:54:10.649823 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:54:10.649830 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:54:10.649886 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:54:10.677496 1437114 cri.go:89] found id: ""
	I1209 05:54:10.677529 1437114 logs.go:282] 0 containers: []
	W1209 05:54:10.677538 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:54:10.677544 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:54:10.677603 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:54:10.705951 1437114 cri.go:89] found id: ""
	I1209 05:54:10.705982 1437114 logs.go:282] 0 containers: []
	W1209 05:54:10.705991 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:54:10.705997 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:54:10.706062 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:54:10.730882 1437114 cri.go:89] found id: ""
	I1209 05:54:10.730957 1437114 logs.go:282] 0 containers: []
	W1209 05:54:10.730980 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:54:10.730998 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:54:10.731088 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:54:10.757722 1437114 cri.go:89] found id: ""
	I1209 05:54:10.757753 1437114 logs.go:282] 0 containers: []
	W1209 05:54:10.757761 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:54:10.757771 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:54:10.757784 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:54:10.817777 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:54:10.817812 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:54:10.834055 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:54:10.834083 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:54:10.898677 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:54:10.890728    7018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:10.891591    7018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:10.893093    7018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:10.893520    7018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:10.894977    7018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:54:10.890728    7018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:10.891591    7018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:10.893093    7018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:10.893520    7018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:10.894977    7018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:54:10.898700 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:54:10.898713 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:54:10.923656 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:54:10.923690 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:54:13.467228 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:54:13.477812 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:54:13.477886 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:54:13.503323 1437114 cri.go:89] found id: ""
	I1209 05:54:13.503351 1437114 logs.go:282] 0 containers: []
	W1209 05:54:13.503360 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:54:13.503367 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:54:13.503441 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:54:13.538282 1437114 cri.go:89] found id: ""
	I1209 05:54:13.538310 1437114 logs.go:282] 0 containers: []
	W1209 05:54:13.538318 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:54:13.538324 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:54:13.538382 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:54:13.565556 1437114 cri.go:89] found id: ""
	I1209 05:54:13.565584 1437114 logs.go:282] 0 containers: []
	W1209 05:54:13.565594 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:54:13.565600 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:54:13.565659 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:54:13.594477 1437114 cri.go:89] found id: ""
	I1209 05:54:13.594499 1437114 logs.go:282] 0 containers: []
	W1209 05:54:13.594508 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:54:13.594514 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:54:13.594575 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:54:13.618630 1437114 cri.go:89] found id: ""
	I1209 05:54:13.618651 1437114 logs.go:282] 0 containers: []
	W1209 05:54:13.618658 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:54:13.618664 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:54:13.618720 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:54:13.643760 1437114 cri.go:89] found id: ""
	I1209 05:54:13.643786 1437114 logs.go:282] 0 containers: []
	W1209 05:54:13.643795 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:54:13.643801 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:54:13.643858 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:54:13.669716 1437114 cri.go:89] found id: ""
	I1209 05:54:13.669741 1437114 logs.go:282] 0 containers: []
	W1209 05:54:13.669749 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:54:13.669756 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:54:13.669848 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:54:13.693820 1437114 cri.go:89] found id: ""
	I1209 05:54:13.693847 1437114 logs.go:282] 0 containers: []
	W1209 05:54:13.693855 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:54:13.693864 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:54:13.693875 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:54:13.750893 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:54:13.750940 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:54:13.767174 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:54:13.767247 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:54:13.834450 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:54:13.823547    7135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:13.824086    7135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:13.828520    7135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:13.828897    7135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:13.830390    7135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:54:13.823547    7135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:13.824086    7135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:13.828520    7135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:13.828897    7135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:13.830390    7135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:54:13.834476 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:54:13.834491 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:54:13.860109 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:54:13.860148 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:54:16.386616 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:54:16.396767 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:54:16.396835 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:54:16.421557 1437114 cri.go:89] found id: ""
	I1209 05:54:16.421580 1437114 logs.go:282] 0 containers: []
	W1209 05:54:16.421589 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:54:16.421595 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:54:16.421655 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:54:16.462411 1437114 cri.go:89] found id: ""
	I1209 05:54:16.462432 1437114 logs.go:282] 0 containers: []
	W1209 05:54:16.462441 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:54:16.462447 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:54:16.462505 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:54:16.493789 1437114 cri.go:89] found id: ""
	I1209 05:54:16.493811 1437114 logs.go:282] 0 containers: []
	W1209 05:54:16.493819 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:54:16.493825 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:54:16.493887 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:54:16.523482 1437114 cri.go:89] found id: ""
	I1209 05:54:16.523504 1437114 logs.go:282] 0 containers: []
	W1209 05:54:16.523513 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:54:16.523519 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:54:16.523578 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:54:16.548318 1437114 cri.go:89] found id: ""
	I1209 05:54:16.548354 1437114 logs.go:282] 0 containers: []
	W1209 05:54:16.548363 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:54:16.548386 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:54:16.548471 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:54:16.573131 1437114 cri.go:89] found id: ""
	I1209 05:54:16.573158 1437114 logs.go:282] 0 containers: []
	W1209 05:54:16.573167 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:54:16.573173 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:54:16.573233 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:54:16.596652 1437114 cri.go:89] found id: ""
	I1209 05:54:16.596680 1437114 logs.go:282] 0 containers: []
	W1209 05:54:16.596689 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:54:16.596695 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:54:16.596754 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:54:16.622109 1437114 cri.go:89] found id: ""
	I1209 05:54:16.622131 1437114 logs.go:282] 0 containers: []
	W1209 05:54:16.622139 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:54:16.622148 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:54:16.622160 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:54:16.637977 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:54:16.638014 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:54:16.701887 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:54:16.693598    7245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:16.694125    7245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:16.695778    7245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:16.696319    7245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:16.697759    7245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:54:16.693598    7245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:16.694125    7245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:16.695778    7245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:16.696319    7245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:16.697759    7245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:54:16.701914 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:54:16.701927 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:54:16.728328 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:54:16.728362 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:54:16.756551 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:54:16.756581 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:54:19.313862 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:54:19.323798 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:54:19.323881 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:54:19.348899 1437114 cri.go:89] found id: ""
	I1209 05:54:19.348924 1437114 logs.go:282] 0 containers: []
	W1209 05:54:19.348932 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:54:19.348939 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:54:19.348996 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:54:19.373133 1437114 cri.go:89] found id: ""
	I1209 05:54:19.373156 1437114 logs.go:282] 0 containers: []
	W1209 05:54:19.373164 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:54:19.373170 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:54:19.373226 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:54:19.397615 1437114 cri.go:89] found id: ""
	I1209 05:54:19.397642 1437114 logs.go:282] 0 containers: []
	W1209 05:54:19.397651 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:54:19.397657 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:54:19.397716 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:54:19.426484 1437114 cri.go:89] found id: ""
	I1209 05:54:19.426505 1437114 logs.go:282] 0 containers: []
	W1209 05:54:19.426513 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:54:19.426519 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:54:19.426575 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:54:19.454826 1437114 cri.go:89] found id: ""
	I1209 05:54:19.454852 1437114 logs.go:282] 0 containers: []
	W1209 05:54:19.454868 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:54:19.454874 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:54:19.454941 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:54:19.483800 1437114 cri.go:89] found id: ""
	I1209 05:54:19.483821 1437114 logs.go:282] 0 containers: []
	W1209 05:54:19.483829 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:54:19.483835 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:54:19.483890 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:54:19.510301 1437114 cri.go:89] found id: ""
	I1209 05:54:19.510322 1437114 logs.go:282] 0 containers: []
	W1209 05:54:19.510330 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:54:19.510336 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:54:19.510392 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:54:19.533740 1437114 cri.go:89] found id: ""
	I1209 05:54:19.533766 1437114 logs.go:282] 0 containers: []
	W1209 05:54:19.533775 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:54:19.533785 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:54:19.533797 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:54:19.590533 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:54:19.590609 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:54:19.607749 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:54:19.607831 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:54:19.670098 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:54:19.662273    7359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:19.663063    7359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:19.664591    7359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:19.664886    7359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:19.666309    7359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:54:19.662273    7359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:19.663063    7359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:19.664591    7359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:19.664886    7359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:19.666309    7359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:54:19.670121 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:54:19.670135 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:54:19.696365 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:54:19.696401 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
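
The cycle above is minikube's control-plane probe: for each expected component it asks the CRI runtime, via crictl, whether a matching container exists in any state, and an empty "found id" plus "0 containers: []" means containerd has never even created one. A minimal sketch of that per-component check in Go, assuming crictl and sudo are available on the node; the helper name listContainerIDs is illustrative, not minikube's actual API (the real logic lives in cri.go):

    // listContainerIDs sketches the check behind each
    // `sudo crictl ps -a --quiet --name=<component>` line above: it returns
    // the IDs of all containers (running or exited) whose name matches.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func listContainerIDs(name string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return nil, fmt.Errorf("crictl ps failed for %q: %w", name, err)
        }
        // --quiet prints one container ID per line; empty output means none.
        return strings.Fields(string(out)), nil
    }

    func main() {
        // The same component list the log cycles through.
        for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard"} {
            ids, err := listContainerIDs(c)
            if err != nil {
                fmt.Println("warn:", err)
                continue
            }
            fmt.Printf("%s: %d containers\n", c, len(ids))
        }
    }
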
	I1209 05:54:22.225234 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:54:22.235522 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:54:22.235590 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:54:22.260044 1437114 cri.go:89] found id: ""
	I1209 05:54:22.260067 1437114 logs.go:282] 0 containers: []
	W1209 05:54:22.260076 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:54:22.260082 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:54:22.260141 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:54:22.283666 1437114 cri.go:89] found id: ""
	I1209 05:54:22.283694 1437114 logs.go:282] 0 containers: []
	W1209 05:54:22.283702 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:54:22.283708 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:54:22.283764 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:54:22.307779 1437114 cri.go:89] found id: ""
	I1209 05:54:22.307812 1437114 logs.go:282] 0 containers: []
	W1209 05:54:22.307821 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:54:22.307827 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:54:22.307884 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:54:22.333595 1437114 cri.go:89] found id: ""
	I1209 05:54:22.333621 1437114 logs.go:282] 0 containers: []
	W1209 05:54:22.333629 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:54:22.333635 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:54:22.333692 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:54:22.357452 1437114 cri.go:89] found id: ""
	I1209 05:54:22.357476 1437114 logs.go:282] 0 containers: []
	W1209 05:54:22.357484 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:54:22.357490 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:54:22.357551 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:54:22.382107 1437114 cri.go:89] found id: ""
	I1209 05:54:22.382170 1437114 logs.go:282] 0 containers: []
	W1209 05:54:22.382184 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:54:22.382192 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:54:22.382251 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:54:22.406738 1437114 cri.go:89] found id: ""
	I1209 05:54:22.406770 1437114 logs.go:282] 0 containers: []
	W1209 05:54:22.406780 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:54:22.406787 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:54:22.406858 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:54:22.432967 1437114 cri.go:89] found id: ""
	I1209 05:54:22.433002 1437114 logs.go:282] 0 containers: []
	W1209 05:54:22.433011 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:54:22.433020 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:54:22.433030 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:54:22.496308 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:54:22.496347 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:54:22.513215 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:54:22.513243 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:54:22.576557 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:54:22.568457    7472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:22.569106    7472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:22.570813    7472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:22.571288    7472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:22.572769    7472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:54:22.568457    7472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:22.569106    7472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:22.570813    7472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:22.571288    7472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:22.572769    7472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:54:22.576620 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:54:22.576641 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:54:22.601775 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:54:22.601808 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
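
The probe repeats on a short fixed cadence (cycles begin at 05:54:19, :22, :25, and so on, roughly every three seconds), each one opening with `sudo pgrep -xnf kube-apiserver.*minikube.*`. A hedged sketch of such a poll-until-deadline loop; the 2.5 s interval and 5-minute timeout are illustrative assumptions, not minikube's actual constants:

    // waitForAPIServer polls the pgrep check that opens every cycle above
    // until it succeeds or the context expires.
    package main

    import (
        "context"
        "fmt"
        "os/exec"
        "time"
    )

    func apiServerProcessRunning() bool {
        // pgrep exits 0 only when at least one process matches the pattern.
        return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
    }

    func waitForAPIServer(ctx context.Context, interval time.Duration) error {
        ticker := time.NewTicker(interval)
        defer ticker.Stop()
        for {
            if apiServerProcessRunning() {
                return nil
            }
            select {
            case <-ctx.Done():
                return fmt.Errorf("kube-apiserver never appeared: %w", ctx.Err())
            case <-ticker.C:
            }
        }
    }

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
        defer cancel()
        if err := waitForAPIServer(ctx, 2500*time.Millisecond); err != nil {
            fmt.Println(err)
        }
    }
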
	I1209 05:54:25.129209 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:54:25.140801 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:54:25.140875 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:54:25.167673 1437114 cri.go:89] found id: ""
	I1209 05:54:25.167699 1437114 logs.go:282] 0 containers: []
	W1209 05:54:25.167708 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:54:25.167714 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:54:25.167774 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:54:25.213289 1437114 cri.go:89] found id: ""
	I1209 05:54:25.213317 1437114 logs.go:282] 0 containers: []
	W1209 05:54:25.213326 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:54:25.213332 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:54:25.213394 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:54:25.251150 1437114 cri.go:89] found id: ""
	I1209 05:54:25.251173 1437114 logs.go:282] 0 containers: []
	W1209 05:54:25.251181 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:54:25.251187 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:54:25.251251 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:54:25.278324 1437114 cri.go:89] found id: ""
	I1209 05:54:25.278347 1437114 logs.go:282] 0 containers: []
	W1209 05:54:25.278355 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:54:25.278361 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:54:25.278426 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:54:25.305947 1437114 cri.go:89] found id: ""
	I1209 05:54:25.305968 1437114 logs.go:282] 0 containers: []
	W1209 05:54:25.305976 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:54:25.305982 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:54:25.306043 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:54:25.330741 1437114 cri.go:89] found id: ""
	I1209 05:54:25.330766 1437114 logs.go:282] 0 containers: []
	W1209 05:54:25.330774 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:54:25.330780 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:54:25.330842 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:54:25.357251 1437114 cri.go:89] found id: ""
	I1209 05:54:25.357289 1437114 logs.go:282] 0 containers: []
	W1209 05:54:25.357297 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:54:25.357303 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:54:25.357361 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:54:25.381550 1437114 cri.go:89] found id: ""
	I1209 05:54:25.381574 1437114 logs.go:282] 0 containers: []
	W1209 05:54:25.381582 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:54:25.381643 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:54:25.381661 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:54:25.407792 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:54:25.407826 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:54:25.444380 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:54:25.444411 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:54:25.508703 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:54:25.508739 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:54:25.525308 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:54:25.525335 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:54:25.590403 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:54:25.582560    7600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:25.583141    7600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:25.584775    7600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:25.585120    7600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:25.586571    7600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:54:25.582560    7600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:25.583141    7600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:25.584775    7600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:25.585120    7600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:25.586571    7600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
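
Every `kubectl describe nodes` attempt in these cycles dies the same way: `dial tcp [::1]:8443: connect: connection refused`, i.e. the client cannot even open a TCP connection, so the failure is at the socket level rather than an API-level error. A cheap way to confirm that distinction without invoking kubectl at all is a plain dial; a minimal sketch:

    // Probe the apiserver endpoint directly. "connection refused", as seen
    // throughout the log, means nothing is listening on the port, which is a
    // less ambiguous signal than parsing kubectl's stderr.
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
        if err != nil {
            fmt.Println("apiserver socket not accepting connections:", err)
            return
        }
        conn.Close()
        fmt.Println("something is listening on localhost:8443")
    }
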
	I1209 05:54:28.090673 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:54:28.101806 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:54:28.101927 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:54:28.126175 1437114 cri.go:89] found id: ""
	I1209 05:54:28.126210 1437114 logs.go:282] 0 containers: []
	W1209 05:54:28.126219 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:54:28.126225 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:54:28.126302 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:54:28.154842 1437114 cri.go:89] found id: ""
	I1209 05:54:28.154863 1437114 logs.go:282] 0 containers: []
	W1209 05:54:28.154872 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:54:28.154878 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:54:28.154936 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:54:28.181513 1437114 cri.go:89] found id: ""
	I1209 05:54:28.181536 1437114 logs.go:282] 0 containers: []
	W1209 05:54:28.181543 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:54:28.181550 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:54:28.181606 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:54:28.208958 1437114 cri.go:89] found id: ""
	I1209 05:54:28.208979 1437114 logs.go:282] 0 containers: []
	W1209 05:54:28.208987 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:54:28.208993 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:54:28.209051 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:54:28.236261 1437114 cri.go:89] found id: ""
	I1209 05:54:28.236288 1437114 logs.go:282] 0 containers: []
	W1209 05:54:28.236296 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:54:28.236308 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:54:28.236365 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:54:28.264550 1437114 cri.go:89] found id: ""
	I1209 05:54:28.264573 1437114 logs.go:282] 0 containers: []
	W1209 05:54:28.264582 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:54:28.264588 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:54:28.264645 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:54:28.288754 1437114 cri.go:89] found id: ""
	I1209 05:54:28.288779 1437114 logs.go:282] 0 containers: []
	W1209 05:54:28.288787 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:54:28.288805 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:54:28.288865 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:54:28.311894 1437114 cri.go:89] found id: ""
	I1209 05:54:28.311922 1437114 logs.go:282] 0 containers: []
	W1209 05:54:28.311931 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:54:28.311941 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:54:28.311952 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:54:28.368882 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:54:28.368916 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:54:28.385073 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:54:28.385102 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:54:28.453852 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:54:28.445585    7699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:28.446317    7699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:28.447999    7699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:28.448560    7699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:28.449990    7699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:54:28.445585    7699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:28.446317    7699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:28.447999    7699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:28.448560    7699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:28.449990    7699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:54:28.453912 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:54:28.453948 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:54:28.481464 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:54:28.481542 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
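
With no component containers to inspect, each cycle falls back to host-level sources: the kubelet and containerd units via journalctl, the kernel ring buffer via dmesg, and a crictl (or docker) `ps -a` for container status. A hedged sketch of that fan-out, with the shell commands copied verbatim from the log and a hypothetical gatherLogs helper standing in for logs.go:

    // gatherLogs runs the same host-level commands shown under
    // "Gathering logs for ..." above; output is keyed by source name.
    // Running these for real requires root on the node.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func gatherLogs() map[string]string {
        sources := map[string]string{
            "kubelet":          "sudo journalctl -u kubelet -n 400",
            "dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
            "containerd":       "sudo journalctl -u containerd -n 400",
            "container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
        }
        out := make(map[string]string, len(sources))
        for name, cmd := range sources {
            b, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
            if err != nil {
                out[name] = fmt.Sprintf("error: %v\n%s", err, b)
                continue
            }
            out[name] = string(b)
        }
        return out
    }

    func main() {
        for name, text := range gatherLogs() {
            fmt.Printf("=== %s (%d bytes)\n", name, len(text))
        }
    }
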
	I1209 05:54:31.017971 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:54:31.028776 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:54:31.028848 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:54:31.059955 1437114 cri.go:89] found id: ""
	I1209 05:54:31.059979 1437114 logs.go:282] 0 containers: []
	W1209 05:54:31.059988 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:54:31.059995 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:54:31.060087 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:54:31.085360 1437114 cri.go:89] found id: ""
	I1209 05:54:31.085389 1437114 logs.go:282] 0 containers: []
	W1209 05:54:31.085398 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:54:31.085404 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:54:31.085466 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:54:31.112050 1437114 cri.go:89] found id: ""
	I1209 05:54:31.112083 1437114 logs.go:282] 0 containers: []
	W1209 05:54:31.112092 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:54:31.112100 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:54:31.112170 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:54:31.139102 1437114 cri.go:89] found id: ""
	I1209 05:54:31.139138 1437114 logs.go:282] 0 containers: []
	W1209 05:54:31.139147 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:54:31.139153 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:54:31.139223 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:54:31.166677 1437114 cri.go:89] found id: ""
	I1209 05:54:31.166710 1437114 logs.go:282] 0 containers: []
	W1209 05:54:31.166720 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:54:31.166727 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:54:31.166818 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:54:31.204582 1437114 cri.go:89] found id: ""
	I1209 05:54:31.204610 1437114 logs.go:282] 0 containers: []
	W1209 05:54:31.204619 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:54:31.204626 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:54:31.204693 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:54:31.242874 1437114 cri.go:89] found id: ""
	I1209 05:54:31.242900 1437114 logs.go:282] 0 containers: []
	W1209 05:54:31.242909 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:54:31.242916 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:54:31.242991 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:54:31.268196 1437114 cri.go:89] found id: ""
	I1209 05:54:31.268225 1437114 logs.go:282] 0 containers: []
	W1209 05:54:31.268234 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:54:31.268243 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:54:31.268254 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:54:31.293521 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:54:31.293559 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:54:31.321144 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:54:31.321175 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:54:31.378617 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:54:31.378656 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:54:31.394506 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:54:31.394533 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:54:31.467240 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:54:31.458393    7825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:31.459167    7825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:31.460831    7825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:31.461408    7825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:31.463045    7825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:54:31.458393    7825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:31.459167    7825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:31.460831    7825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:31.461408    7825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:31.463045    7825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
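
Note how each failure block reports stdout (empty) and stderr (the connection-refused lines) separately; that separation is what makes the diagnosis unambiguous. A minimal sketch of capturing the two streams independently, in the style of minikube's ssh_runner; kubectl being on PATH is an assumption here:

    // runCapture executes a command and returns stdout and stderr separately,
    // mirroring the "stdout:" / "stderr:" layout of the failure blocks above.
    package main

    import (
        "bytes"
        "fmt"
        "os/exec"
    )

    func runCapture(name string, args ...string) (string, string, error) {
        var stdout, stderr bytes.Buffer
        cmd := exec.Command(name, args...)
        cmd.Stdout = &stdout
        cmd.Stderr = &stderr
        err := cmd.Run()
        return stdout.String(), stderr.String(), err
    }

    func main() {
        out, errOut, err := runCapture("kubectl", "describe", "nodes",
            "--kubeconfig=/var/lib/minikube/kubeconfig")
        fmt.Printf("stdout:\n%s\nstderr:\n%s\nerr: %v\n", out, errOut, err)
    }
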
	I1209 05:54:33.967506 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:54:33.977826 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:54:33.977902 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:54:34.002325 1437114 cri.go:89] found id: ""
	I1209 05:54:34.002351 1437114 logs.go:282] 0 containers: []
	W1209 05:54:34.002360 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:54:34.002367 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:54:34.002443 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:54:34.029888 1437114 cri.go:89] found id: ""
	I1209 05:54:34.029919 1437114 logs.go:282] 0 containers: []
	W1209 05:54:34.029928 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:54:34.029935 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:54:34.029996 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:54:34.058673 1437114 cri.go:89] found id: ""
	I1209 05:54:34.058698 1437114 logs.go:282] 0 containers: []
	W1209 05:54:34.058706 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:54:34.058712 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:54:34.058783 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:54:34.083346 1437114 cri.go:89] found id: ""
	I1209 05:54:34.083370 1437114 logs.go:282] 0 containers: []
	W1209 05:54:34.083379 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:54:34.083385 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:54:34.083453 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:54:34.108098 1437114 cri.go:89] found id: ""
	I1209 05:54:34.108126 1437114 logs.go:282] 0 containers: []
	W1209 05:54:34.108135 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:54:34.108141 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:54:34.108227 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:54:34.133779 1437114 cri.go:89] found id: ""
	I1209 05:54:34.133803 1437114 logs.go:282] 0 containers: []
	W1209 05:54:34.133812 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:54:34.133819 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:54:34.133877 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:54:34.161528 1437114 cri.go:89] found id: ""
	I1209 05:54:34.161607 1437114 logs.go:282] 0 containers: []
	W1209 05:54:34.161639 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:54:34.161662 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:54:34.161779 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:54:34.191325 1437114 cri.go:89] found id: ""
	I1209 05:54:34.191400 1437114 logs.go:282] 0 containers: []
	W1209 05:54:34.191423 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:54:34.191443 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:54:34.191493 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:54:34.258939 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:54:34.258977 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:54:34.275607 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:54:34.275640 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:54:34.346638 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:54:34.338621    7924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:34.339268    7924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:34.340363    7924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:34.340982    7924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:34.342615    7924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:54:34.338621    7924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:34.339268    7924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:34.340363    7924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:34.340982    7924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:34.342615    7924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:54:34.346709 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:54:34.346754 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:54:34.373053 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:54:34.373092 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:54:36.904183 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:54:36.914625 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:54:36.914703 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:54:36.939165 1437114 cri.go:89] found id: ""
	I1209 05:54:36.939204 1437114 logs.go:282] 0 containers: []
	W1209 05:54:36.939213 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:54:36.939220 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:54:36.939280 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:54:36.968277 1437114 cri.go:89] found id: ""
	I1209 05:54:36.968303 1437114 logs.go:282] 0 containers: []
	W1209 05:54:36.968312 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:54:36.968319 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:54:36.968379 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:54:36.993837 1437114 cri.go:89] found id: ""
	I1209 05:54:36.993866 1437114 logs.go:282] 0 containers: []
	W1209 05:54:36.993875 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:54:36.993882 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:54:36.993939 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:54:37.029321 1437114 cri.go:89] found id: ""
	I1209 05:54:37.029358 1437114 logs.go:282] 0 containers: []
	W1209 05:54:37.029370 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:54:37.029381 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:54:37.029479 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:54:37.060208 1437114 cri.go:89] found id: ""
	I1209 05:54:37.060235 1437114 logs.go:282] 0 containers: []
	W1209 05:54:37.060244 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:54:37.060251 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:54:37.060311 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:54:37.085969 1437114 cri.go:89] found id: ""
	I1209 05:54:37.085992 1437114 logs.go:282] 0 containers: []
	W1209 05:54:37.086001 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:54:37.086007 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:54:37.086066 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:54:37.114324 1437114 cri.go:89] found id: ""
	I1209 05:54:37.114357 1437114 logs.go:282] 0 containers: []
	W1209 05:54:37.114367 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:54:37.114373 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:54:37.114478 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:54:37.143312 1437114 cri.go:89] found id: ""
	I1209 05:54:37.143339 1437114 logs.go:282] 0 containers: []
	W1209 05:54:37.143348 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:54:37.143357 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:54:37.143369 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:54:37.234893 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:54:37.226773    8026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:37.227615    8026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:37.228809    8026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:37.229450    8026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:37.231054    8026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:54:37.226773    8026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:37.227615    8026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:37.228809    8026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:37.229450    8026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:37.231054    8026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:54:37.234921 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:54:37.234933 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:54:37.262601 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:54:37.262635 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:54:37.289433 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:54:37.289458 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:54:37.345400 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:54:37.345435 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
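
One small detail worth noticing: the "Gathering logs for ..." order shuffles from cycle to cycle (kubelet first in some, containerd or describe-nodes first in others). That pattern is consistent with ranging over a Go map, whose iteration order is deliberately unspecified; whether logs.go actually does this is an inference, but the effect is easy to demonstrate:

    // Go randomizes map iteration order, which would explain why the
    // "Gathering logs for ..." sequence differs between cycles above.
    package main

    import "fmt"

    func main() {
        sources := map[string]bool{
            "kubelet": true, "dmesg": true, "describe nodes": true,
            "containerd": true, "container status": true,
        }
        for i := 0; i < 3; i++ {
            for name := range sources { // order may differ on each pass
                fmt.Print(name, " | ")
            }
            fmt.Println()
        }
    }
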
	I1209 05:54:39.861840 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:54:39.873772 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:54:39.873850 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:54:39.901691 1437114 cri.go:89] found id: ""
	I1209 05:54:39.901714 1437114 logs.go:282] 0 containers: []
	W1209 05:54:39.901725 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:54:39.901731 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:54:39.901793 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:54:39.925900 1437114 cri.go:89] found id: ""
	I1209 05:54:39.925935 1437114 logs.go:282] 0 containers: []
	W1209 05:54:39.925944 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:54:39.925950 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:54:39.926009 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:54:39.951997 1437114 cri.go:89] found id: ""
	I1209 05:54:39.952041 1437114 logs.go:282] 0 containers: []
	W1209 05:54:39.952050 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:54:39.952056 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:54:39.952116 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:54:39.980156 1437114 cri.go:89] found id: ""
	I1209 05:54:39.980182 1437114 logs.go:282] 0 containers: []
	W1209 05:54:39.980190 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:54:39.980196 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:54:39.980255 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:54:40.007109 1437114 cri.go:89] found id: ""
	I1209 05:54:40.007136 1437114 logs.go:282] 0 containers: []
	W1209 05:54:40.007146 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:54:40.007154 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:54:40.007234 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:54:40.056170 1437114 cri.go:89] found id: ""
	I1209 05:54:40.056197 1437114 logs.go:282] 0 containers: []
	W1209 05:54:40.056207 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:54:40.056214 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:54:40.056298 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:54:40.085850 1437114 cri.go:89] found id: ""
	I1209 05:54:40.085879 1437114 logs.go:282] 0 containers: []
	W1209 05:54:40.085888 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:54:40.085894 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:54:40.085960 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:54:40.118208 1437114 cri.go:89] found id: ""
	I1209 05:54:40.118245 1437114 logs.go:282] 0 containers: []
	W1209 05:54:40.118256 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:54:40.118267 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:54:40.118281 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:54:40.195166 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:54:40.184383    8140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:40.185244    8140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:40.187445    8140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:40.188458    8140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:40.189404    8140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:54:40.184383    8140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:40.185244    8140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:40.187445    8140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:40.188458    8140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:40.189404    8140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:54:40.195189 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:54:40.195203 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:54:40.223567 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:54:40.223651 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:54:40.266759 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:54:40.266786 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:54:40.323783 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:54:40.323818 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:54:42.842021 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:54:42.852681 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:54:42.852755 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:54:42.876598 1437114 cri.go:89] found id: ""
	I1209 05:54:42.876622 1437114 logs.go:282] 0 containers: []
	W1209 05:54:42.876631 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:54:42.876637 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:54:42.876694 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:54:42.901491 1437114 cri.go:89] found id: ""
	I1209 05:54:42.901515 1437114 logs.go:282] 0 containers: []
	W1209 05:54:42.901523 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:54:42.901529 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:54:42.901588 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:54:42.930050 1437114 cri.go:89] found id: ""
	I1209 05:54:42.930077 1437114 logs.go:282] 0 containers: []
	W1209 05:54:42.930086 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:54:42.930093 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:54:42.930151 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:54:42.953794 1437114 cri.go:89] found id: ""
	I1209 05:54:42.953817 1437114 logs.go:282] 0 containers: []
	W1209 05:54:42.953825 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:54:42.953837 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:54:42.953940 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:54:42.977300 1437114 cri.go:89] found id: ""
	I1209 05:54:42.977324 1437114 logs.go:282] 0 containers: []
	W1209 05:54:42.977333 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:54:42.977339 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:54:42.977416 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:54:43.001015 1437114 cri.go:89] found id: ""
	I1209 05:54:43.001080 1437114 logs.go:282] 0 containers: []
	W1209 05:54:43.001095 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:54:43.001103 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:54:43.001169 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:54:43.026886 1437114 cri.go:89] found id: ""
	I1209 05:54:43.026910 1437114 logs.go:282] 0 containers: []
	W1209 05:54:43.026918 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:54:43.026925 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:54:43.026984 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:54:43.057227 1437114 cri.go:89] found id: ""
	I1209 05:54:43.057253 1437114 logs.go:282] 0 containers: []
	W1209 05:54:43.057271 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:54:43.057281 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:54:43.057293 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:54:43.115319 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:54:43.115357 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:54:43.131310 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:54:43.131346 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:54:43.204953 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:54:43.196603    8257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:43.197525    8257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:43.199091    8257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:43.199623    8257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:43.201121    8257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:54:43.204975 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:54:43.204987 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:54:43.231713 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:54:43.231747 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:54:45.766147 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:54:45.776210 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:54:45.776285 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:54:45.804782 1437114 cri.go:89] found id: ""
	I1209 05:54:45.804810 1437114 logs.go:282] 0 containers: []
	W1209 05:54:45.804857 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:54:45.804871 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:54:45.804939 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:54:45.828660 1437114 cri.go:89] found id: ""
	I1209 05:54:45.828684 1437114 logs.go:282] 0 containers: []
	W1209 05:54:45.828692 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:54:45.828698 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:54:45.828758 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:54:45.853575 1437114 cri.go:89] found id: ""
	I1209 05:54:45.853598 1437114 logs.go:282] 0 containers: []
	W1209 05:54:45.853606 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:54:45.853612 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:54:45.853667 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:54:45.877674 1437114 cri.go:89] found id: ""
	I1209 05:54:45.877697 1437114 logs.go:282] 0 containers: []
	W1209 05:54:45.877705 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:54:45.877711 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:54:45.877775 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:54:45.902246 1437114 cri.go:89] found id: ""
	I1209 05:54:45.902270 1437114 logs.go:282] 0 containers: []
	W1209 05:54:45.902284 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:54:45.902291 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:54:45.902347 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:54:45.929443 1437114 cri.go:89] found id: ""
	I1209 05:54:45.929517 1437114 logs.go:282] 0 containers: []
	W1209 05:54:45.929532 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:54:45.929539 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:54:45.929596 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:54:45.955032 1437114 cri.go:89] found id: ""
	I1209 05:54:45.955065 1437114 logs.go:282] 0 containers: []
	W1209 05:54:45.955074 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:54:45.955081 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:54:45.955147 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:54:45.983502 1437114 cri.go:89] found id: ""
	I1209 05:54:45.983527 1437114 logs.go:282] 0 containers: []
	W1209 05:54:45.983535 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:54:45.983544 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:54:45.983555 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:54:46.049253 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:54:46.049292 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:54:46.066199 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:54:46.066229 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:54:46.133498 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:54:46.124747    8372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:46.125334    8372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:46.126986    8372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:46.127505    8372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:46.129096    8372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:54:46.133521 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:54:46.133534 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:54:46.159468 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:54:46.159500 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:54:48.698046 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:54:48.710430 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:54:48.710504 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:54:48.739692 1437114 cri.go:89] found id: ""
	I1209 05:54:48.739718 1437114 logs.go:282] 0 containers: []
	W1209 05:54:48.739726 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:54:48.739733 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:54:48.739790 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:54:48.764166 1437114 cri.go:89] found id: ""
	I1209 05:54:48.764192 1437114 logs.go:282] 0 containers: []
	W1209 05:54:48.764200 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:54:48.764206 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:54:48.764264 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:54:48.788074 1437114 cri.go:89] found id: ""
	I1209 05:54:48.788097 1437114 logs.go:282] 0 containers: []
	W1209 05:54:48.788114 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:54:48.788122 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:54:48.788189 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:54:48.813373 1437114 cri.go:89] found id: ""
	I1209 05:54:48.813398 1437114 logs.go:282] 0 containers: []
	W1209 05:54:48.813407 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:54:48.813414 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:54:48.813472 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:54:48.840222 1437114 cri.go:89] found id: ""
	I1209 05:54:48.840248 1437114 logs.go:282] 0 containers: []
	W1209 05:54:48.840256 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:54:48.840270 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:54:48.840331 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:54:48.869002 1437114 cri.go:89] found id: ""
	I1209 05:54:48.869025 1437114 logs.go:282] 0 containers: []
	W1209 05:54:48.869034 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:54:48.869041 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:54:48.869098 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:54:48.897074 1437114 cri.go:89] found id: ""
	I1209 05:54:48.897100 1437114 logs.go:282] 0 containers: []
	W1209 05:54:48.897108 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:54:48.897115 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:54:48.897193 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:54:48.920534 1437114 cri.go:89] found id: ""
	I1209 05:54:48.920559 1437114 logs.go:282] 0 containers: []
	W1209 05:54:48.920567 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:54:48.920576 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:54:48.920588 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:54:48.976882 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:54:48.976918 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:54:48.992754 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:54:48.992782 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:54:49.058058 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:54:49.049574    8487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:49.050149    8487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:49.051765    8487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:49.052269    8487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:49.053870    8487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:54:49.058079 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:54:49.058092 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:54:49.083543 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:54:49.083578 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:54:51.613470 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:54:51.625228 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:54:51.625329 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:54:51.651832 1437114 cri.go:89] found id: ""
	I1209 05:54:51.651863 1437114 logs.go:282] 0 containers: []
	W1209 05:54:51.651871 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:54:51.651878 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:54:51.651989 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:54:51.689430 1437114 cri.go:89] found id: ""
	I1209 05:54:51.689471 1437114 logs.go:282] 0 containers: []
	W1209 05:54:51.689480 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:54:51.689486 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:54:51.689556 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:54:51.718333 1437114 cri.go:89] found id: ""
	I1209 05:54:51.718377 1437114 logs.go:282] 0 containers: []
	W1209 05:54:51.718387 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:54:51.718394 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:54:51.718468 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:54:51.748566 1437114 cri.go:89] found id: ""
	I1209 05:54:51.748641 1437114 logs.go:282] 0 containers: []
	W1209 05:54:51.748656 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:54:51.748663 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:54:51.748732 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:54:51.773149 1437114 cri.go:89] found id: ""
	I1209 05:54:51.773175 1437114 logs.go:282] 0 containers: []
	W1209 05:54:51.773184 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:54:51.773191 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:54:51.773283 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:54:51.802227 1437114 cri.go:89] found id: ""
	I1209 05:54:51.802253 1437114 logs.go:282] 0 containers: []
	W1209 05:54:51.802262 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:54:51.802272 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:54:51.802351 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:54:51.833926 1437114 cri.go:89] found id: ""
	I1209 05:54:51.833994 1437114 logs.go:282] 0 containers: []
	W1209 05:54:51.834016 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:54:51.834036 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:54:51.834126 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:54:51.859887 1437114 cri.go:89] found id: ""
	I1209 05:54:51.859919 1437114 logs.go:282] 0 containers: []
	W1209 05:54:51.859927 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:54:51.859937 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:54:51.859948 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:54:51.876110 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:54:51.876138 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:54:51.942848 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:54:51.934424    8598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:51.935014    8598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:51.936468    8598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:51.937091    8598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:51.938535    8598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:54:51.942870 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:54:51.942883 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:54:51.968433 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:54:51.968466 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:54:51.996383 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:54:51.996421 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:54:54.554719 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:54:54.565346 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:54:54.565415 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:54:54.593426 1437114 cri.go:89] found id: ""
	I1209 05:54:54.593450 1437114 logs.go:282] 0 containers: []
	W1209 05:54:54.593458 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:54:54.593464 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:54:54.593522 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:54:54.621281 1437114 cri.go:89] found id: ""
	I1209 05:54:54.621304 1437114 logs.go:282] 0 containers: []
	W1209 05:54:54.621312 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:54:54.621318 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:54:54.621376 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:54:54.646126 1437114 cri.go:89] found id: ""
	I1209 05:54:54.646194 1437114 logs.go:282] 0 containers: []
	W1209 05:54:54.646216 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:54:54.646234 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:54:54.646318 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:54:54.674944 1437114 cri.go:89] found id: ""
	I1209 05:54:54.674986 1437114 logs.go:282] 0 containers: []
	W1209 05:54:54.675011 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:54:54.675029 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:54:54.675110 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:54:54.700733 1437114 cri.go:89] found id: ""
	I1209 05:54:54.700766 1437114 logs.go:282] 0 containers: []
	W1209 05:54:54.700775 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:54:54.700781 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:54:54.700860 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:54:54.733555 1437114 cri.go:89] found id: ""
	I1209 05:54:54.733634 1437114 logs.go:282] 0 containers: []
	W1209 05:54:54.733656 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:54:54.733676 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:54:54.733777 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:54:54.759852 1437114 cri.go:89] found id: ""
	I1209 05:54:54.759926 1437114 logs.go:282] 0 containers: []
	W1209 05:54:54.759949 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:54:54.759972 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:54:54.760110 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:54:54.784303 1437114 cri.go:89] found id: ""
	I1209 05:54:54.784377 1437114 logs.go:282] 0 containers: []
	W1209 05:54:54.784392 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:54:54.784402 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:54:54.784413 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:54:54.809753 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:54:54.809790 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:54:54.836589 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:54:54.836617 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:54:54.899737 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:54:54.899784 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:54:54.915785 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:54:54.915814 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:54:54.979896 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:54:54.971488    8724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:54.971906    8724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:54.973479    8724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:54.974140    8724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:54.976063    8724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:54:57.480193 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:54:57.491395 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:54:57.491473 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:54:57.518091 1437114 cri.go:89] found id: ""
	I1209 05:54:57.518114 1437114 logs.go:282] 0 containers: []
	W1209 05:54:57.518123 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:54:57.518130 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:54:57.518191 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:54:57.545921 1437114 cri.go:89] found id: ""
	I1209 05:54:57.545954 1437114 logs.go:282] 0 containers: []
	W1209 05:54:57.545962 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:54:57.545969 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:54:57.546037 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:54:57.570249 1437114 cri.go:89] found id: ""
	I1209 05:54:57.570280 1437114 logs.go:282] 0 containers: []
	W1209 05:54:57.570290 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:54:57.570296 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:54:57.570367 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:54:57.597541 1437114 cri.go:89] found id: ""
	I1209 05:54:57.597565 1437114 logs.go:282] 0 containers: []
	W1209 05:54:57.597576 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:54:57.597583 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:54:57.597639 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:54:57.625351 1437114 cri.go:89] found id: ""
	I1209 05:54:57.625374 1437114 logs.go:282] 0 containers: []
	W1209 05:54:57.625382 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:54:57.625388 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:54:57.625446 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:54:57.653430 1437114 cri.go:89] found id: ""
	I1209 05:54:57.653504 1437114 logs.go:282] 0 containers: []
	W1209 05:54:57.653520 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:54:57.653528 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:54:57.653592 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:54:57.686655 1437114 cri.go:89] found id: ""
	I1209 05:54:57.686681 1437114 logs.go:282] 0 containers: []
	W1209 05:54:57.686704 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:54:57.686711 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:54:57.686783 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:54:57.715897 1437114 cri.go:89] found id: ""
	I1209 05:54:57.715924 1437114 logs.go:282] 0 containers: []
	W1209 05:54:57.715932 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:54:57.715941 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:54:57.715952 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:54:57.781835 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:54:57.781871 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:54:57.798499 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:54:57.798527 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:54:57.870136 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:54:57.856259    8825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:57.861442    8825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:57.864278    8825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:57.864723    8825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:57.866272    8825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:54:57.870169 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:54:57.870182 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:54:57.894760 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:54:57.894794 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
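(Each "describe nodes" gather fails identically: crictl finds no kube-apiserver container, so nothing is listening on port 8443 and kubectl's request is refused at the TCP layer rather than rejected by an unhealthy apiserver. A quick manual check that separates those two cases — hedged sketch; the /livez endpoint is the standard apiserver health path and is an assumption here, not something taken from this log:)

	# hedged sketch: check whether anything is serving on the apiserver port at all
	curl -kfsS https://localhost:8443/livez \
	  && echo "apiserver answering /livez" \
	  || echo "no healthy listener on :8443 (plain 'connection refused' in this run)"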
	I1209 05:55:00.423491 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:55:00.436333 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:55:00.436416 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:55:00.477329 1437114 cri.go:89] found id: ""
	I1209 05:55:00.477357 1437114 logs.go:282] 0 containers: []
	W1209 05:55:00.477367 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:55:00.477373 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:55:00.477440 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:55:00.510439 1437114 cri.go:89] found id: ""
	I1209 05:55:00.510467 1437114 logs.go:282] 0 containers: []
	W1209 05:55:00.510477 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:55:00.510483 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:55:00.510565 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:55:00.539373 1437114 cri.go:89] found id: ""
	I1209 05:55:00.539404 1437114 logs.go:282] 0 containers: []
	W1209 05:55:00.539413 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:55:00.539420 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:55:00.539484 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:55:00.567440 1437114 cri.go:89] found id: ""
	I1209 05:55:00.567470 1437114 logs.go:282] 0 containers: []
	W1209 05:55:00.567479 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:55:00.567486 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:55:00.567547 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:55:00.603417 1437114 cri.go:89] found id: ""
	I1209 05:55:00.603442 1437114 logs.go:282] 0 containers: []
	W1209 05:55:00.603450 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:55:00.603456 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:55:00.603515 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:55:00.628877 1437114 cri.go:89] found id: ""
	I1209 05:55:00.628900 1437114 logs.go:282] 0 containers: []
	W1209 05:55:00.628909 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:55:00.628915 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:55:00.628972 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:55:00.657533 1437114 cri.go:89] found id: ""
	I1209 05:55:00.657562 1437114 logs.go:282] 0 containers: []
	W1209 05:55:00.657571 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:55:00.657578 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:55:00.657638 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:55:00.686066 1437114 cri.go:89] found id: ""
	I1209 05:55:00.686090 1437114 logs.go:282] 0 containers: []
	W1209 05:55:00.686099 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:55:00.686108 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:55:00.686120 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:55:00.708508 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:55:00.708588 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:55:00.777301 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:55:00.768863    8933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:00.769274    8933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:00.770892    8933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:00.771415    8933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:00.772464    8933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:55:00.777372 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:55:00.777394 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:55:00.802304 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:55:00.802337 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:55:00.829410 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:55:00.829436 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:55:03.385877 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:55:03.396171 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:55:03.396238 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:55:03.420742 1437114 cri.go:89] found id: ""
	I1209 05:55:03.420767 1437114 logs.go:282] 0 containers: []
	W1209 05:55:03.420775 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:55:03.420781 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:55:03.420837 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:55:03.458835 1437114 cri.go:89] found id: ""
	I1209 05:55:03.458861 1437114 logs.go:282] 0 containers: []
	W1209 05:55:03.458869 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:55:03.458876 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:55:03.458934 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:55:03.488300 1437114 cri.go:89] found id: ""
	I1209 05:55:03.488326 1437114 logs.go:282] 0 containers: []
	W1209 05:55:03.488334 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:55:03.488340 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:55:03.488400 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:55:03.516405 1437114 cri.go:89] found id: ""
	I1209 05:55:03.516432 1437114 logs.go:282] 0 containers: []
	W1209 05:55:03.516440 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:55:03.516446 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:55:03.516506 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:55:03.545401 1437114 cri.go:89] found id: ""
	I1209 05:55:03.545467 1437114 logs.go:282] 0 containers: []
	W1209 05:55:03.545492 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:55:03.545510 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:55:03.545597 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:55:03.570243 1437114 cri.go:89] found id: ""
	I1209 05:55:03.570316 1437114 logs.go:282] 0 containers: []
	W1209 05:55:03.570342 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:55:03.570357 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:55:03.570449 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:55:03.594930 1437114 cri.go:89] found id: ""
	I1209 05:55:03.594955 1437114 logs.go:282] 0 containers: []
	W1209 05:55:03.594965 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:55:03.594971 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:55:03.595030 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:55:03.619052 1437114 cri.go:89] found id: ""
	I1209 05:55:03.619080 1437114 logs.go:282] 0 containers: []
	W1209 05:55:03.619089 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:55:03.619098 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:55:03.619114 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:55:03.676980 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:55:03.677019 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:55:03.697398 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:55:03.697427 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:55:03.769575 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:55:03.761997    9046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:03.762424    9046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:03.763695    9046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:03.764060    9046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:03.765630    9046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:55:03.761997    9046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:03.762424    9046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:03.763695    9046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:03.764060    9046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:03.765630    9046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:55:03.769607 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:55:03.769620 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:55:03.794589 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:55:03.794623 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
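The cycle above is the shape of every retry that follows: minikube first looks for a kube-apiserver process, then asks crictl for any container, running or exited, whose name matches a control-plane component, and every query returns an empty ID list. A minimal shell sketch of that sweep (an illustration of the pattern in this trace, not minikube's source; it assumes crictl is installed and containerd is the runtime):

	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	            kube-controller-manager kindnet kubernetes-dashboard; do
	  # -a includes exited containers; --quiet prints only IDs, so an empty
	  # result means a container with this name was never even created.
	  ids=$(sudo crictl ps -a --quiet --name="$name")
	  [ -z "$ids" ] && echo "no container matching \"$name\""
	done

An empty result for kube-apiserver in particular is what makes the describe-nodes step fail each cycle: nothing ever bound localhost:8443.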
	I1209 05:55:06.321615 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:55:06.331929 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:55:06.331999 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:55:06.358377 1437114 cri.go:89] found id: ""
	I1209 05:55:06.358403 1437114 logs.go:282] 0 containers: []
	W1209 05:55:06.358411 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:55:06.358418 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:55:06.358481 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:55:06.384508 1437114 cri.go:89] found id: ""
	I1209 05:55:06.384533 1437114 logs.go:282] 0 containers: []
	W1209 05:55:06.384542 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:55:06.384548 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:55:06.384607 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:55:06.408779 1437114 cri.go:89] found id: ""
	I1209 05:55:06.408801 1437114 logs.go:282] 0 containers: []
	W1209 05:55:06.408810 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:55:06.408816 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:55:06.408874 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:55:06.441591 1437114 cri.go:89] found id: ""
	I1209 05:55:06.441613 1437114 logs.go:282] 0 containers: []
	W1209 05:55:06.441622 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:55:06.441628 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:55:06.441689 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:55:06.474533 1437114 cri.go:89] found id: ""
	I1209 05:55:06.474555 1437114 logs.go:282] 0 containers: []
	W1209 05:55:06.474567 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:55:06.474574 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:55:06.474706 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:55:06.503583 1437114 cri.go:89] found id: ""
	I1209 05:55:06.503655 1437114 logs.go:282] 0 containers: []
	W1209 05:55:06.503677 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:55:06.503697 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:55:06.503785 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:55:06.529409 1437114 cri.go:89] found id: ""
	I1209 05:55:06.529434 1437114 logs.go:282] 0 containers: []
	W1209 05:55:06.529443 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:55:06.529449 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:55:06.529508 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:55:06.559184 1437114 cri.go:89] found id: ""
	I1209 05:55:06.559254 1437114 logs.go:282] 0 containers: []
	W1209 05:55:06.559289 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:55:06.559317 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:55:06.559341 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:55:06.616116 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:55:06.616152 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:55:06.632189 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:55:06.632218 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:55:06.703879 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:55:06.694883    9157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:06.695859    9157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:06.697486    9157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:06.698063    9157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:06.699592    9157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:55:06.694883    9157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:06.695859    9157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:06.697486    9157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:06.698063    9157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:06.699592    9157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:55:06.703908 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:55:06.703924 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:55:06.733107 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:55:06.733166 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
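The describe-nodes failure repeats verbatim each cycle: kubectl on the node targets https://localhost:8443 and is refused, which is the expected symptom when no apiserver container exists. Two standard probes that would confirm this from inside the node, independent of kubectl (a hedged suggestion, not something the harness runs):

	# ss should show a kube-apiserver listener on 8443 on a healthy node.
	sudo ss -ltnp | grep ':8443' || echo "nothing listening on 8443"
	# /healthz answers "ok" on a working control plane; here it cannot connect.
	curl -ks https://localhost:8443/healthz || echo "connection refused, matching the log"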
	I1209 05:55:09.268085 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:55:09.278413 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:55:09.278488 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:55:09.301738 1437114 cri.go:89] found id: ""
	I1209 05:55:09.301764 1437114 logs.go:282] 0 containers: []
	W1209 05:55:09.301773 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:55:09.301779 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:55:09.301836 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:55:09.329939 1437114 cri.go:89] found id: ""
	I1209 05:55:09.329962 1437114 logs.go:282] 0 containers: []
	W1209 05:55:09.329970 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:55:09.329976 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:55:09.330032 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:55:09.358792 1437114 cri.go:89] found id: ""
	I1209 05:55:09.358825 1437114 logs.go:282] 0 containers: []
	W1209 05:55:09.358834 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:55:09.358840 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:55:09.358934 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:55:09.383783 1437114 cri.go:89] found id: ""
	I1209 05:55:09.383806 1437114 logs.go:282] 0 containers: []
	W1209 05:55:09.383814 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:55:09.383820 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:55:09.383881 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:55:09.409956 1437114 cri.go:89] found id: ""
	I1209 05:55:09.409982 1437114 logs.go:282] 0 containers: []
	W1209 05:55:09.409990 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:55:09.409997 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:55:09.410054 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:55:09.442388 1437114 cri.go:89] found id: ""
	I1209 05:55:09.442471 1437114 logs.go:282] 0 containers: []
	W1209 05:55:09.442502 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:55:09.442524 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:55:09.442611 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:55:09.472213 1437114 cri.go:89] found id: ""
	I1209 05:55:09.472234 1437114 logs.go:282] 0 containers: []
	W1209 05:55:09.472243 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:55:09.472249 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:55:09.472306 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:55:09.500348 1437114 cri.go:89] found id: ""
	I1209 05:55:09.500372 1437114 logs.go:282] 0 containers: []
	W1209 05:55:09.500381 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:55:09.500390 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:55:09.500401 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:55:09.556960 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:55:09.556998 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:55:09.573143 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:55:09.573173 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:55:09.641645 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:55:09.634078    9266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:09.634591    9266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:09.636259    9266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:09.636782    9266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:09.637775    9266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:55:09.634078    9266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:09.634591    9266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:09.636259    9266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:09.636782    9266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:09.637775    9266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:55:09.641669 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:55:09.641682 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:55:09.667979 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:55:09.668100 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
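Each cycle opens with sudo pgrep -xnf kube-apiserver.*minikube.*, a process-level check that runs before the crictl sweep. Per pgrep's manual, -f matches the pattern against the full command line, -x requires the pattern to match that command line exactly, and -n reports only the newest matching process. A sketch of how the exit status drives the retry loop seen here (illustrative, not minikube's code):

	if sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; then
	  echo "apiserver process found; polling can stop"
	else
	  echo "no apiserver process yet; gather logs and retry in ~3s"
	fi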
	I1209 05:55:12.205096 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:55:12.215660 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:55:12.215729 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:55:12.239566 1437114 cri.go:89] found id: ""
	I1209 05:55:12.239594 1437114 logs.go:282] 0 containers: []
	W1209 05:55:12.239603 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:55:12.239609 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:55:12.239668 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:55:12.267891 1437114 cri.go:89] found id: ""
	I1209 05:55:12.267914 1437114 logs.go:282] 0 containers: []
	W1209 05:55:12.267924 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:55:12.267930 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:55:12.267992 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:55:12.296494 1437114 cri.go:89] found id: ""
	I1209 05:55:12.296523 1437114 logs.go:282] 0 containers: []
	W1209 05:55:12.296532 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:55:12.296539 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:55:12.296602 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:55:12.322105 1437114 cri.go:89] found id: ""
	I1209 05:55:12.322135 1437114 logs.go:282] 0 containers: []
	W1209 05:55:12.322144 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:55:12.322151 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:55:12.322208 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:55:12.347978 1437114 cri.go:89] found id: ""
	I1209 05:55:12.348001 1437114 logs.go:282] 0 containers: []
	W1209 05:55:12.348010 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:55:12.348038 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:55:12.348096 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:55:12.372241 1437114 cri.go:89] found id: ""
	I1209 05:55:12.372275 1437114 logs.go:282] 0 containers: []
	W1209 05:55:12.372311 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:55:12.372318 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:55:12.372384 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:55:12.397758 1437114 cri.go:89] found id: ""
	I1209 05:55:12.397784 1437114 logs.go:282] 0 containers: []
	W1209 05:55:12.397792 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:55:12.397799 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:55:12.397860 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:55:12.422922 1437114 cri.go:89] found id: ""
	I1209 05:55:12.422948 1437114 logs.go:282] 0 containers: []
	W1209 05:55:12.422958 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:55:12.422968 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:55:12.422981 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:55:12.480231 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:55:12.480268 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:55:12.497991 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:55:12.498029 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:55:12.565247 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:55:12.557686    9379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:12.558053    9379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:12.559575    9379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:12.559888    9379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:12.561291    9379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:55:12.557686    9379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:12.558053    9379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:12.559575    9379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:12.559888    9379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:12.561291    9379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:55:12.565279 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:55:12.565293 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:55:12.590420 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:55:12.590459 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
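The three log gatherers repeated in every cycle are worth reading once in full. These are the exact commands from the trace; the flag gloss follows the systemd and util-linux manuals: journalctl -u selects one unit and -n keeps the last N lines, while for dmesg -H formats for humans, -P disables the pager, -L=never disables color, and --level restricts output to the listed severities.

	sudo journalctl -u kubelet -n 400       # last 400 kubelet journal lines
	sudo journalctl -u containerd -n 400    # last 400 containerd journal lines
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400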
	I1209 05:55:15.122535 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:55:15.133065 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:55:15.133140 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:55:15.158369 1437114 cri.go:89] found id: ""
	I1209 05:55:15.158393 1437114 logs.go:282] 0 containers: []
	W1209 05:55:15.158401 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:55:15.158407 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:55:15.158492 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:55:15.184526 1437114 cri.go:89] found id: ""
	I1209 05:55:15.184550 1437114 logs.go:282] 0 containers: []
	W1209 05:55:15.184558 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:55:15.184564 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:55:15.184627 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:55:15.210248 1437114 cri.go:89] found id: ""
	I1209 05:55:15.210288 1437114 logs.go:282] 0 containers: []
	W1209 05:55:15.210300 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:55:15.210312 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:55:15.210376 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:55:15.239458 1437114 cri.go:89] found id: ""
	I1209 05:55:15.239486 1437114 logs.go:282] 0 containers: []
	W1209 05:55:15.239495 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:55:15.239501 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:55:15.239560 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:55:15.265625 1437114 cri.go:89] found id: ""
	I1209 05:55:15.265649 1437114 logs.go:282] 0 containers: []
	W1209 05:55:15.265658 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:55:15.265664 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:55:15.265729 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:55:15.289543 1437114 cri.go:89] found id: ""
	I1209 05:55:15.289577 1437114 logs.go:282] 0 containers: []
	W1209 05:55:15.289587 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:55:15.289593 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:55:15.289663 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:55:15.314575 1437114 cri.go:89] found id: ""
	I1209 05:55:15.314610 1437114 logs.go:282] 0 containers: []
	W1209 05:55:15.314618 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:55:15.314625 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:55:15.314704 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:55:15.339832 1437114 cri.go:89] found id: ""
	I1209 05:55:15.339858 1437114 logs.go:282] 0 containers: []
	W1209 05:55:15.339865 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:55:15.339875 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:55:15.339890 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:55:15.356748 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:55:15.356774 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:55:15.418122 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:55:15.410189    9483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:15.410797    9483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:15.412374    9483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:15.412679    9483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:15.414149    9483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:55:15.410189    9483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:15.410797    9483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:15.412374    9483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:15.412679    9483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:15.414149    9483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:55:15.418145 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:55:15.418157 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:55:15.446826 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:55:15.446866 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:55:15.483531 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:55:15.483560 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:55:18.042444 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:55:18.053775 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:55:18.053853 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:55:18.090768 1437114 cri.go:89] found id: ""
	I1209 05:55:18.090790 1437114 logs.go:282] 0 containers: []
	W1209 05:55:18.090800 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:55:18.090806 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:55:18.090869 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:55:18.117411 1437114 cri.go:89] found id: ""
	I1209 05:55:18.117438 1437114 logs.go:282] 0 containers: []
	W1209 05:55:18.117448 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:55:18.117458 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:55:18.117516 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:55:18.143495 1437114 cri.go:89] found id: ""
	I1209 05:55:18.143523 1437114 logs.go:282] 0 containers: []
	W1209 05:55:18.143531 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:55:18.143538 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:55:18.143601 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:55:18.169282 1437114 cri.go:89] found id: ""
	I1209 05:55:18.169310 1437114 logs.go:282] 0 containers: []
	W1209 05:55:18.169319 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:55:18.169325 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:55:18.169387 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:55:18.194143 1437114 cri.go:89] found id: ""
	I1209 05:55:18.194210 1437114 logs.go:282] 0 containers: []
	W1209 05:55:18.194234 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:55:18.194248 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:55:18.194319 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:55:18.218540 1437114 cri.go:89] found id: ""
	I1209 05:55:18.218564 1437114 logs.go:282] 0 containers: []
	W1209 05:55:18.218573 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:55:18.218579 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:55:18.218635 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:55:18.242500 1437114 cri.go:89] found id: ""
	I1209 05:55:18.242533 1437114 logs.go:282] 0 containers: []
	W1209 05:55:18.242541 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:55:18.242554 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:55:18.242625 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:55:18.268163 1437114 cri.go:89] found id: ""
	I1209 05:55:18.268189 1437114 logs.go:282] 0 containers: []
	W1209 05:55:18.268198 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:55:18.268207 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:55:18.268219 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:55:18.325316 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:55:18.325352 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:55:18.341326 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:55:18.341355 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:55:18.406565 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:55:18.398134    9600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:18.398838    9600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:18.400544    9600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:18.401064    9600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:18.402624    9600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:55:18.398134    9600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:18.398838    9600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:18.400544    9600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:18.401064    9600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:18.402624    9600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:55:18.406588 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:55:18.406601 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:55:18.432715 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:55:18.433008 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
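The container-status gatherer is a double fallback in one line: the backquoted which crictl || echo crictl substitutes the crictl path if one exists and the bare word crictl otherwise (keeping the command string well formed so the trailing || sudo docker ps -a can fire if crictl fails). A behavior-equivalent rewrite with modern substitution syntax:

	# Prefer crictl when available; otherwise fall back to docker.
	sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a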
	I1209 05:55:20.971861 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:55:20.983326 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:55:20.983402 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:55:21.009562 1437114 cri.go:89] found id: ""
	I1209 05:55:21.009588 1437114 logs.go:282] 0 containers: []
	W1209 05:55:21.009598 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:55:21.009606 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:55:21.009671 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:55:21.034329 1437114 cri.go:89] found id: ""
	I1209 05:55:21.034355 1437114 logs.go:282] 0 containers: []
	W1209 05:55:21.034364 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:55:21.034370 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:55:21.034444 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:55:21.058554 1437114 cri.go:89] found id: ""
	I1209 05:55:21.058575 1437114 logs.go:282] 0 containers: []
	W1209 05:55:21.058584 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:55:21.058592 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:55:21.058648 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:55:21.086391 1437114 cri.go:89] found id: ""
	I1209 05:55:21.086416 1437114 logs.go:282] 0 containers: []
	W1209 05:55:21.086425 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:55:21.086432 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:55:21.086495 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:55:21.113734 1437114 cri.go:89] found id: ""
	I1209 05:55:21.113757 1437114 logs.go:282] 0 containers: []
	W1209 05:55:21.113771 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:55:21.113777 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:55:21.113836 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:55:21.138081 1437114 cri.go:89] found id: ""
	I1209 05:55:21.138106 1437114 logs.go:282] 0 containers: []
	W1209 05:55:21.138115 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:55:21.138122 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:55:21.138188 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:55:21.162430 1437114 cri.go:89] found id: ""
	I1209 05:55:21.162454 1437114 logs.go:282] 0 containers: []
	W1209 05:55:21.162462 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:55:21.162468 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:55:21.162527 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:55:21.187241 1437114 cri.go:89] found id: ""
	I1209 05:55:21.187269 1437114 logs.go:282] 0 containers: []
	W1209 05:55:21.187277 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:55:21.187286 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:55:21.187298 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:55:21.243731 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:55:21.243768 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:55:21.259723 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:55:21.259752 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:55:21.331265 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:55:21.322926    9714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:21.323669    9714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:21.325163    9714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:21.325582    9714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:21.327036    9714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:55:21.322926    9714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:21.323669    9714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:21.325163    9714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:21.325582    9714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:21.327036    9714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:55:21.331287 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:55:21.331300 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:55:21.357424 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:55:21.357460 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:55:23.888418 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:55:23.899458 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:55:23.899526 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:55:23.923896 1437114 cri.go:89] found id: ""
	I1209 05:55:23.923962 1437114 logs.go:282] 0 containers: []
	W1209 05:55:23.923986 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:55:23.924004 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:55:23.924112 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:55:23.951339 1437114 cri.go:89] found id: ""
	I1209 05:55:23.951409 1437114 logs.go:282] 0 containers: []
	W1209 05:55:23.951432 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:55:23.951450 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:55:23.951535 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:55:23.980727 1437114 cri.go:89] found id: ""
	I1209 05:55:23.980797 1437114 logs.go:282] 0 containers: []
	W1209 05:55:23.980821 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:55:23.980838 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:55:23.980927 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:55:24.018661 1437114 cri.go:89] found id: ""
	I1209 05:55:24.018691 1437114 logs.go:282] 0 containers: []
	W1209 05:55:24.018702 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:55:24.018709 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:55:24.018778 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:55:24.049508 1437114 cri.go:89] found id: ""
	I1209 05:55:24.049536 1437114 logs.go:282] 0 containers: []
	W1209 05:55:24.049545 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:55:24.049551 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:55:24.049610 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:55:24.074712 1437114 cri.go:89] found id: ""
	I1209 05:55:24.074741 1437114 logs.go:282] 0 containers: []
	W1209 05:55:24.074751 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:55:24.074757 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:55:24.074825 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:55:24.100769 1437114 cri.go:89] found id: ""
	I1209 05:55:24.100795 1437114 logs.go:282] 0 containers: []
	W1209 05:55:24.100804 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:55:24.100810 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:55:24.100871 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:55:24.125003 1437114 cri.go:89] found id: ""
	I1209 05:55:24.125031 1437114 logs.go:282] 0 containers: []
	W1209 05:55:24.125039 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:55:24.125049 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:55:24.125061 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:55:24.194763 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:55:24.186517    9818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:24.187020    9818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:24.188525    9818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:24.188998    9818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:24.190667    9818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:55:24.186517    9818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:24.187020    9818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:24.188525    9818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:24.188998    9818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:24.190667    9818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:55:24.194832 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:55:24.194870 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:55:24.220205 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:55:24.220239 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:55:24.246742 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:55:24.246769 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:55:24.303551 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:55:24.303584 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:55:26.819975 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:55:26.830655 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:55:26.830725 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:55:26.858629 1437114 cri.go:89] found id: ""
	I1209 05:55:26.858653 1437114 logs.go:282] 0 containers: []
	W1209 05:55:26.858661 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:55:26.858667 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:55:26.858733 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:55:26.883327 1437114 cri.go:89] found id: ""
	I1209 05:55:26.883354 1437114 logs.go:282] 0 containers: []
	W1209 05:55:26.883363 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:55:26.883369 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:55:26.883431 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:55:26.909455 1437114 cri.go:89] found id: ""
	I1209 05:55:26.909475 1437114 logs.go:282] 0 containers: []
	W1209 05:55:26.909484 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:55:26.909490 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:55:26.909551 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:55:26.940313 1437114 cri.go:89] found id: ""
	I1209 05:55:26.940345 1437114 logs.go:282] 0 containers: []
	W1209 05:55:26.940358 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:55:26.940365 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:55:26.940432 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:55:26.974610 1437114 cri.go:89] found id: ""
	I1209 05:55:26.974686 1437114 logs.go:282] 0 containers: []
	W1209 05:55:26.974708 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:55:26.974725 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:55:26.974817 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:55:27.007512 1437114 cri.go:89] found id: ""
	I1209 05:55:27.007592 1437114 logs.go:282] 0 containers: []
	W1209 05:55:27.007616 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:55:27.007637 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:55:27.007748 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:55:27.032955 1437114 cri.go:89] found id: ""
	I1209 05:55:27.033029 1437114 logs.go:282] 0 containers: []
	W1209 05:55:27.033053 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:55:27.033071 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:55:27.033155 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:55:27.057112 1437114 cri.go:89] found id: ""
	I1209 05:55:27.057177 1437114 logs.go:282] 0 containers: []
	W1209 05:55:27.057191 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:55:27.057202 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:55:27.057219 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:55:27.118936 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:55:27.110736    9930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:27.111264    9930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:27.112691    9930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:27.112981    9930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:27.114451    9930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:55:27.110736    9930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:27.111264    9930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:27.112691    9930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:27.112981    9930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:27.114451    9930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:55:27.118961 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:55:27.118974 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:55:27.144106 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:55:27.144179 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:55:27.174234 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:55:27.174260 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:55:27.230096 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:55:27.230129 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:55:29.746369 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:55:29.756575 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:55:29.756649 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:55:29.784727 1437114 cri.go:89] found id: ""
	I1209 05:55:29.784750 1437114 logs.go:282] 0 containers: []
	W1209 05:55:29.784758 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:55:29.784764 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:55:29.784824 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:55:29.808671 1437114 cri.go:89] found id: ""
	I1209 05:55:29.808696 1437114 logs.go:282] 0 containers: []
	W1209 05:55:29.808705 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:55:29.808711 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:55:29.808793 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:55:29.832440 1437114 cri.go:89] found id: ""
	I1209 05:55:29.832470 1437114 logs.go:282] 0 containers: []
	W1209 05:55:29.832479 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:55:29.832485 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:55:29.832549 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:55:29.857781 1437114 cri.go:89] found id: ""
	I1209 05:55:29.857807 1437114 logs.go:282] 0 containers: []
	W1209 05:55:29.857815 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:55:29.857821 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:55:29.857901 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:55:29.882048 1437114 cri.go:89] found id: ""
	I1209 05:55:29.882073 1437114 logs.go:282] 0 containers: []
	W1209 05:55:29.882081 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:55:29.882087 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:55:29.882176 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:55:29.905398 1437114 cri.go:89] found id: ""
	I1209 05:55:29.905422 1437114 logs.go:282] 0 containers: []
	W1209 05:55:29.905431 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:55:29.905438 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:55:29.905526 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:55:29.931783 1437114 cri.go:89] found id: ""
	I1209 05:55:29.931816 1437114 logs.go:282] 0 containers: []
	W1209 05:55:29.931824 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:55:29.931831 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:55:29.931903 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:55:29.961633 1437114 cri.go:89] found id: ""
	I1209 05:55:29.961665 1437114 logs.go:282] 0 containers: []
	W1209 05:55:29.961673 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:55:29.961683 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:55:29.961695 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:55:30.041769 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:55:30.025451   10042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:30.026529   10042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:30.027374   10042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:30.029780   10042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:30.030693   10042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:55:30.025451   10042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:30.026529   10042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:30.027374   10042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:30.029780   10042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:30.030693   10042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:55:30.041793 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:55:30.041807 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:55:30.069912 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:55:30.069946 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:55:30.104202 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:55:30.104232 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:55:30.162750 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:55:30.162784 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
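The "container status" step shells out with a fallback: sudo `which crictl || echo crictl` ps -a || sudo docker ps -a, i.e. prefer crictl and fall back to docker if crictl is missing or fails. A sketch of the same preference order in Go, assuming local exec rather than the SSH runner the log uses:

package main

import (
	"fmt"
	"os/exec"
)

// containerStatus prefers crictl and falls back to docker, mirroring the
// shell fallback in the log; both paths assume sudo is available.
func containerStatus() (string, error) {
	if out, err := exec.Command("sudo", "crictl", "ps", "-a").Output(); err == nil {
		return string(out), nil
	}
	out, err := exec.Command("sudo", "docker", "ps", "-a").Output()
	return string(out), err
}

func main() {
	out, err := containerStatus()
	if err != nil {
		fmt.Println("neither crictl nor docker answered:", err)
		return
	}
	fmt.Print(out)
}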
	I1209 05:55:32.680152 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:55:32.694260 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:55:32.694425 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:55:32.728965 1437114 cri.go:89] found id: ""
	I1209 05:55:32.729064 1437114 logs.go:282] 0 containers: []
	W1209 05:55:32.729088 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:55:32.729108 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:55:32.729212 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:55:32.760196 1437114 cri.go:89] found id: ""
	I1209 05:55:32.760220 1437114 logs.go:282] 0 containers: []
	W1209 05:55:32.760228 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:55:32.760235 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:55:32.760303 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:55:32.785415 1437114 cri.go:89] found id: ""
	I1209 05:55:32.785448 1437114 logs.go:282] 0 containers: []
	W1209 05:55:32.785457 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:55:32.785463 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:55:32.785528 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:55:32.809252 1437114 cri.go:89] found id: ""
	I1209 05:55:32.809327 1437114 logs.go:282] 0 containers: []
	W1209 05:55:32.809343 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:55:32.809357 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:55:32.809417 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:55:32.834255 1437114 cri.go:89] found id: ""
	I1209 05:55:32.834281 1437114 logs.go:282] 0 containers: []
	W1209 05:55:32.834295 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:55:32.834302 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:55:32.834362 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:55:32.859400 1437114 cri.go:89] found id: ""
	I1209 05:55:32.859426 1437114 logs.go:282] 0 containers: []
	W1209 05:55:32.859443 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:55:32.859450 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:55:32.859519 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:55:32.897012 1437114 cri.go:89] found id: ""
	I1209 05:55:32.897037 1437114 logs.go:282] 0 containers: []
	W1209 05:55:32.897046 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:55:32.897053 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:55:32.897167 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:55:32.921653 1437114 cri.go:89] found id: ""
	I1209 05:55:32.921685 1437114 logs.go:282] 0 containers: []
	W1209 05:55:32.921693 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:55:32.921703 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:55:32.921713 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:55:32.948373 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:55:32.948454 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:55:32.981605 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:55:32.981678 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:55:33.043445 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:55:33.043481 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:55:33.059128 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:55:33.059160 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:55:33.122257 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:55:33.113462   10172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:33.113864   10172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:33.115638   10172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:33.116342   10172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:33.117919   10172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:55:33.113462   10172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:33.113864   10172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:33.115638   10172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:33.116342   10172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:33.117919   10172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
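Every kubectl attempt above fails the same way: dial tcp [::1]:8443: connect: connection refused, meaning nothing is listening on the apiserver port at all, as opposed to a TLS or authorization failure. A quick way to confirm that distinction is a bare probe of the endpoint; this sketch skips certificate verification, which the real kubeconfig-based client would not:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Skip cert verification: this probe only distinguishes
			// "nothing listening" from "server up but TLS/auth failing".
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://localhost:8443/api")
	if err != nil {
		// With no apiserver bound to 8443 this fails exactly like the
		// log: connect: connection refused.
		fmt.Println("apiserver unreachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("apiserver responded:", resp.Status)
}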
	I1209 05:55:35.623296 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:55:35.635539 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:55:35.635647 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:55:35.663702 1437114 cri.go:89] found id: ""
	I1209 05:55:35.663741 1437114 logs.go:282] 0 containers: []
	W1209 05:55:35.663753 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:55:35.663760 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:55:35.663865 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:55:35.707406 1437114 cri.go:89] found id: ""
	I1209 05:55:35.707485 1437114 logs.go:282] 0 containers: []
	W1209 05:55:35.707508 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:55:35.707544 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:55:35.707629 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:55:35.734669 1437114 cri.go:89] found id: ""
	I1209 05:55:35.734749 1437114 logs.go:282] 0 containers: []
	W1209 05:55:35.734771 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:55:35.734811 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:55:35.734897 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:55:35.764935 1437114 cri.go:89] found id: ""
	I1209 05:55:35.765012 1437114 logs.go:282] 0 containers: []
	W1209 05:55:35.765036 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:55:35.765054 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:55:35.765127 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:55:35.788829 1437114 cri.go:89] found id: ""
	I1209 05:55:35.788853 1437114 logs.go:282] 0 containers: []
	W1209 05:55:35.788869 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:55:35.788876 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:55:35.788978 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:55:35.813639 1437114 cri.go:89] found id: ""
	I1209 05:55:35.813666 1437114 logs.go:282] 0 containers: []
	W1209 05:55:35.813674 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:55:35.813681 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:55:35.813787 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:55:35.843416 1437114 cri.go:89] found id: ""
	I1209 05:55:35.843460 1437114 logs.go:282] 0 containers: []
	W1209 05:55:35.843469 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:55:35.843481 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:55:35.843555 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:55:35.868194 1437114 cri.go:89] found id: ""
	I1209 05:55:35.868221 1437114 logs.go:282] 0 containers: []
	W1209 05:55:35.868231 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:55:35.868239 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:55:35.868251 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:55:35.925041 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:55:35.925080 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:55:35.951129 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:55:35.951341 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:55:36.030987 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:55:36.022457   10271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:36.023229   10271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:36.023993   10271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:36.025131   10271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:36.025699   10271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:55:36.022457   10271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:36.023229   10271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:36.023993   10271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:36.025131   10271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:36.025699   10271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:55:36.031012 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:55:36.031026 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:55:36.058849 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:55:36.058884 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:55:38.588358 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:55:38.598423 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:55:38.598488 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:55:38.622572 1437114 cri.go:89] found id: ""
	I1209 05:55:38.622596 1437114 logs.go:282] 0 containers: []
	W1209 05:55:38.622605 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:55:38.622612 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:55:38.622669 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:55:38.650917 1437114 cri.go:89] found id: ""
	I1209 05:55:38.650942 1437114 logs.go:282] 0 containers: []
	W1209 05:55:38.650950 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:55:38.650956 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:55:38.651013 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:55:38.677402 1437114 cri.go:89] found id: ""
	I1209 05:55:38.677435 1437114 logs.go:282] 0 containers: []
	W1209 05:55:38.677444 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:55:38.677451 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:55:38.677558 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:55:38.707295 1437114 cri.go:89] found id: ""
	I1209 05:55:38.707328 1437114 logs.go:282] 0 containers: []
	W1209 05:55:38.707337 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:55:38.707344 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:55:38.707453 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:55:38.740627 1437114 cri.go:89] found id: ""
	I1209 05:55:38.740652 1437114 logs.go:282] 0 containers: []
	W1209 05:55:38.740660 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:55:38.740667 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:55:38.740727 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:55:38.764991 1437114 cri.go:89] found id: ""
	I1209 05:55:38.765017 1437114 logs.go:282] 0 containers: []
	W1209 05:55:38.765027 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:55:38.765033 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:55:38.765095 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:55:38.789303 1437114 cri.go:89] found id: ""
	I1209 05:55:38.789328 1437114 logs.go:282] 0 containers: []
	W1209 05:55:38.789336 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:55:38.789343 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:55:38.789401 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:55:38.812509 1437114 cri.go:89] found id: ""
	I1209 05:55:38.812533 1437114 logs.go:282] 0 containers: []
	W1209 05:55:38.812541 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:55:38.812551 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:55:38.812562 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:55:38.869277 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:55:38.869309 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:55:38.885634 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:55:38.885663 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:55:38.967787 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:55:38.957406   10382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:38.958335   10382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:38.960358   10382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:38.961032   10382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:38.963013   10382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:55:38.957406   10382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:38.958335   10382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:38.960358   10382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:38.961032   10382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:38.963013   10382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:55:38.967812 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:55:38.967828 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:55:39.000576 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:55:39.000615 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:55:41.533393 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:55:41.544133 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:55:41.544208 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:55:41.569392 1437114 cri.go:89] found id: ""
	I1209 05:55:41.569418 1437114 logs.go:282] 0 containers: []
	W1209 05:55:41.569428 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:55:41.569436 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:55:41.569499 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:55:41.595491 1437114 cri.go:89] found id: ""
	I1209 05:55:41.595517 1437114 logs.go:282] 0 containers: []
	W1209 05:55:41.595526 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:55:41.595532 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:55:41.595592 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:55:41.622211 1437114 cri.go:89] found id: ""
	I1209 05:55:41.622246 1437114 logs.go:282] 0 containers: []
	W1209 05:55:41.622256 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:55:41.622263 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:55:41.622323 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:55:41.646745 1437114 cri.go:89] found id: ""
	I1209 05:55:41.646770 1437114 logs.go:282] 0 containers: []
	W1209 05:55:41.646779 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:55:41.646785 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:55:41.646846 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:55:41.674665 1437114 cri.go:89] found id: ""
	I1209 05:55:41.674689 1437114 logs.go:282] 0 containers: []
	W1209 05:55:41.674699 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:55:41.674706 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:55:41.674768 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:55:41.702586 1437114 cri.go:89] found id: ""
	I1209 05:55:41.702610 1437114 logs.go:282] 0 containers: []
	W1209 05:55:41.702619 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:55:41.702628 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:55:41.702704 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:55:41.741493 1437114 cri.go:89] found id: ""
	I1209 05:55:41.741515 1437114 logs.go:282] 0 containers: []
	W1209 05:55:41.741523 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:55:41.741530 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:55:41.741666 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:55:41.768353 1437114 cri.go:89] found id: ""
	I1209 05:55:41.768465 1437114 logs.go:282] 0 containers: []
	W1209 05:55:41.768479 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:55:41.768490 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:55:41.768529 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:55:41.831484 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:55:41.823412   10488 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:41.824163   10488 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:41.825769   10488 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:41.826063   10488 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:41.827557   10488 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:55:41.823412   10488 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:41.824163   10488 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:41.825769   10488 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:41.826063   10488 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:41.827557   10488 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:55:41.831504 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:55:41.831517 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:55:41.857187 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:55:41.857222 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:55:41.887092 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:55:41.887123 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:55:41.943306 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:55:41.943341 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:55:44.461424 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:55:44.472240 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:55:44.472340 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:55:44.498935 1437114 cri.go:89] found id: ""
	I1209 05:55:44.498961 1437114 logs.go:282] 0 containers: []
	W1209 05:55:44.498970 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:55:44.498976 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:55:44.499034 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:55:44.523535 1437114 cri.go:89] found id: ""
	I1209 05:55:44.523564 1437114 logs.go:282] 0 containers: []
	W1209 05:55:44.523573 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:55:44.523579 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:55:44.523637 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:55:44.548432 1437114 cri.go:89] found id: ""
	I1209 05:55:44.548455 1437114 logs.go:282] 0 containers: []
	W1209 05:55:44.548463 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:55:44.548469 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:55:44.548526 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:55:44.573002 1437114 cri.go:89] found id: ""
	I1209 05:55:44.573024 1437114 logs.go:282] 0 containers: []
	W1209 05:55:44.573034 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:55:44.573040 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:55:44.573098 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:55:44.596807 1437114 cri.go:89] found id: ""
	I1209 05:55:44.596829 1437114 logs.go:282] 0 containers: []
	W1209 05:55:44.596838 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:55:44.596846 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:55:44.596901 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:55:44.624387 1437114 cri.go:89] found id: ""
	I1209 05:55:44.624456 1437114 logs.go:282] 0 containers: []
	W1209 05:55:44.624478 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:55:44.624492 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:55:44.624571 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:55:44.648117 1437114 cri.go:89] found id: ""
	I1209 05:55:44.648143 1437114 logs.go:282] 0 containers: []
	W1209 05:55:44.648151 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:55:44.648158 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:55:44.648229 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:55:44.671908 1437114 cri.go:89] found id: ""
	I1209 05:55:44.671939 1437114 logs.go:282] 0 containers: []
	W1209 05:55:44.671948 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:55:44.671972 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:55:44.671989 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:55:44.732458 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:55:44.732536 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:55:44.753248 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:55:44.753273 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:55:44.822117 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:55:44.814788   10605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:44.815161   10605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:44.816602   10605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:44.816898   10605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:44.818170   10605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:55:44.814788   10605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:44.815161   10605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:44.816602   10605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:44.816898   10605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:44.818170   10605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:55:44.822137 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:55:44.822149 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:55:44.848565 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:55:44.848600 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
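The passes repeat on a roughly three-second cadence (05:55:41.5, 05:55:44.4, 05:55:47.3, ...) until the overall start timeout expires, which is why this one test accumulates so many near-identical blocks. A minimal sketch of such a fixed-interval wait loop; the interval, timeout, and the TCP dial standing in for the full health check are assumptions for illustration, not minikube's actual values:

package main

import (
	"errors"
	"fmt"
	"net"
	"time"
)

// apiserverUp is a stand-in for the real check (pgrep + crictl + kubectl).
func apiserverUp() bool {
	conn, err := net.DialTimeout("tcp", "localhost:8443", time.Second)
	if err != nil {
		return false
	}
	conn.Close()
	return true
}

func waitForAPIServer(interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if apiserverUp() {
			return nil
		}
		// Each failed pass would gather logs here, as in the report.
		time.Sleep(interval)
	}
	return errors.New("timed out waiting for kube-apiserver")
}

func main() {
	if err := waitForAPIServer(3*time.Second, time.Minute); err != nil {
		fmt.Println(err)
	}
}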
	I1209 05:55:47.376875 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:55:47.386961 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:55:47.387031 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:55:47.413420 1437114 cri.go:89] found id: ""
	I1209 05:55:47.413444 1437114 logs.go:282] 0 containers: []
	W1209 05:55:47.413452 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:55:47.413458 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:55:47.413519 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:55:47.441969 1437114 cri.go:89] found id: ""
	I1209 05:55:47.442001 1437114 logs.go:282] 0 containers: []
	W1209 05:55:47.442010 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:55:47.442016 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:55:47.442081 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:55:47.465166 1437114 cri.go:89] found id: ""
	I1209 05:55:47.465195 1437114 logs.go:282] 0 containers: []
	W1209 05:55:47.465210 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:55:47.465216 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:55:47.465283 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:55:47.493704 1437114 cri.go:89] found id: ""
	I1209 05:55:47.493730 1437114 logs.go:282] 0 containers: []
	W1209 05:55:47.493739 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:55:47.493745 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:55:47.493821 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:55:47.519554 1437114 cri.go:89] found id: ""
	I1209 05:55:47.519589 1437114 logs.go:282] 0 containers: []
	W1209 05:55:47.519598 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:55:47.519604 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:55:47.519671 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:55:47.549334 1437114 cri.go:89] found id: ""
	I1209 05:55:47.549367 1437114 logs.go:282] 0 containers: []
	W1209 05:55:47.549376 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:55:47.549383 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:55:47.549456 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:55:47.578946 1437114 cri.go:89] found id: ""
	I1209 05:55:47.578980 1437114 logs.go:282] 0 containers: []
	W1209 05:55:47.578989 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:55:47.578995 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:55:47.579062 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:55:47.603683 1437114 cri.go:89] found id: ""
	I1209 05:55:47.603716 1437114 logs.go:282] 0 containers: []
	W1209 05:55:47.603725 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:55:47.603734 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:55:47.603745 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:55:47.619447 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:55:47.619482 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:55:47.687529 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:55:47.675579   10710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:47.676174   10710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:47.679656   10710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:47.680257   10710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:47.681964   10710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:55:47.687594 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:55:47.687641 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:55:47.715721 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:55:47.715792 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:55:47.745866 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:55:47.745889 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
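The block above is one iteration of minikube's apiserver wait loop: it probes for each expected control-plane container with "crictl ps -a --quiet --name=<component>", and every probe returns an empty ID list, so the loop falls through to log collection. A minimal Go sketch of that per-component lookup, assuming a hypothetical runSSH helper in place of minikube's actual ssh_runner API:

    package sketch

    import (
    	"fmt"
    	"strings"
    )

    // listCRIContainers returns the container IDs that crictl reports for a
    // given name filter. runSSH is a hypothetical stand-in for minikube's
    // ssh_runner: it executes the command on the node and returns stdout.
    func listCRIContainers(runSSH func(string) (string, error), name string) ([]string, error) {
    	out, err := runSSH(fmt.Sprintf("sudo crictl ps -a --quiet --name=%s", name))
    	if err != nil {
    		return nil, err
    	}
    	var ids []string
    	for _, line := range strings.Split(strings.TrimSpace(out), "\n") {
    		if line != "" {
    			ids = append(ids, line) // crictl --quiet prints one container ID per line
    		}
    	}
    	return ids, nil // an empty result reproduces the `found id: ""` lines above
    }

An empty result here is exactly what the repeated "logs.go:282] 0 containers: []" lines record: the container runtime responds, but no Kubernetes containers were ever created.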
	I1209 05:55:50.305015 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:55:50.315642 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:55:50.315787 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:55:50.341274 1437114 cri.go:89] found id: ""
	I1209 05:55:50.341298 1437114 logs.go:282] 0 containers: []
	W1209 05:55:50.341306 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:55:50.341314 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:55:50.341370 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:55:50.366500 1437114 cri.go:89] found id: ""
	I1209 05:55:50.366533 1437114 logs.go:282] 0 containers: []
	W1209 05:55:50.366542 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:55:50.366548 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:55:50.366613 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:55:50.390751 1437114 cri.go:89] found id: ""
	I1209 05:55:50.390787 1437114 logs.go:282] 0 containers: []
	W1209 05:55:50.390796 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:55:50.390802 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:55:50.390867 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:55:50.418576 1437114 cri.go:89] found id: ""
	I1209 05:55:50.418601 1437114 logs.go:282] 0 containers: []
	W1209 05:55:50.418610 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:55:50.418616 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:55:50.418683 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:55:50.447207 1437114 cri.go:89] found id: ""
	I1209 05:55:50.447250 1437114 logs.go:282] 0 containers: []
	W1209 05:55:50.447261 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:55:50.447267 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:55:50.447339 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:55:50.476321 1437114 cri.go:89] found id: ""
	I1209 05:55:50.476346 1437114 logs.go:282] 0 containers: []
	W1209 05:55:50.476354 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:55:50.476372 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:55:50.476430 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:55:50.501331 1437114 cri.go:89] found id: ""
	I1209 05:55:50.501356 1437114 logs.go:282] 0 containers: []
	W1209 05:55:50.501365 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:55:50.501371 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:55:50.501439 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:55:50.525182 1437114 cri.go:89] found id: ""
	I1209 05:55:50.525207 1437114 logs.go:282] 0 containers: []
	W1209 05:55:50.525215 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:55:50.525224 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:55:50.525262 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:55:50.584512 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:55:50.584550 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:55:50.600341 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:55:50.600369 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:55:50.667248 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:55:50.658895   10823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:50.659509   10823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:50.661016   10823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:50.661529   10823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:50.663114   10823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:55:50.667314 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:55:50.667346 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:55:50.695874 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:55:50.695911 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:55:53.232139 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:55:53.242299 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:55:53.242369 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:55:53.265738 1437114 cri.go:89] found id: ""
	I1209 05:55:53.265763 1437114 logs.go:282] 0 containers: []
	W1209 05:55:53.265771 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:55:53.265777 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:55:53.265834 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:55:53.289547 1437114 cri.go:89] found id: ""
	I1209 05:55:53.289571 1437114 logs.go:282] 0 containers: []
	W1209 05:55:53.289580 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:55:53.289586 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:55:53.289644 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:55:53.314432 1437114 cri.go:89] found id: ""
	I1209 05:55:53.314457 1437114 logs.go:282] 0 containers: []
	W1209 05:55:53.314466 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:55:53.314472 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:55:53.314529 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:55:53.338078 1437114 cri.go:89] found id: ""
	I1209 05:55:53.338100 1437114 logs.go:282] 0 containers: []
	W1209 05:55:53.338109 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:55:53.338115 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:55:53.338190 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:55:53.362597 1437114 cri.go:89] found id: ""
	I1209 05:55:53.362623 1437114 logs.go:282] 0 containers: []
	W1209 05:55:53.362632 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:55:53.362638 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:55:53.362700 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:55:53.387075 1437114 cri.go:89] found id: ""
	I1209 05:55:53.387100 1437114 logs.go:282] 0 containers: []
	W1209 05:55:53.387108 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:55:53.387115 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:55:53.387181 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:55:53.410813 1437114 cri.go:89] found id: ""
	I1209 05:55:53.410836 1437114 logs.go:282] 0 containers: []
	W1209 05:55:53.410845 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:55:53.410850 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:55:53.410910 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:55:53.439085 1437114 cri.go:89] found id: ""
	I1209 05:55:53.439107 1437114 logs.go:282] 0 containers: []
	W1209 05:55:53.439115 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:55:53.439124 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:55:53.439135 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:55:53.496416 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:55:53.496450 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:55:53.512950 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:55:53.512979 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:55:53.592134 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:55:53.583228   10933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:53.583903   10933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:53.585634   10933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:53.586183   10933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:53.587806   10933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:55:53.592155 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:55:53.592168 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:55:53.620855 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:55:53.620901 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
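Every "describe nodes" attempt fails the same way: kubectl dials localhost:8443 and gets ECONNREFUSED, which means nothing is listening on the apiserver port at all (as opposed to a TLS or auth failure, where the TCP connect would succeed). A runnable Go sketch of that bare connectivity check, with the address taken from the errors above:

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// Probe the apiserver port the same way the failing kubectl calls do.
    	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
    	if err != nil {
    		fmt.Println("apiserver not reachable:", err) // e.g. "connect: connection refused"
    		return
    	}
    	conn.Close()
    	fmt.Println("something is listening on 8443")
    }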
	I1209 05:55:56.151858 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:55:56.162360 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:55:56.162444 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:55:56.192447 1437114 cri.go:89] found id: ""
	I1209 05:55:56.192474 1437114 logs.go:282] 0 containers: []
	W1209 05:55:56.192482 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:55:56.192488 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:55:56.192545 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:55:56.230900 1437114 cri.go:89] found id: ""
	I1209 05:55:56.230927 1437114 logs.go:282] 0 containers: []
	W1209 05:55:56.230935 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:55:56.230941 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:55:56.231005 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:55:56.264649 1437114 cri.go:89] found id: ""
	I1209 05:55:56.264673 1437114 logs.go:282] 0 containers: []
	W1209 05:55:56.264683 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:55:56.264689 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:55:56.264748 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:55:56.287754 1437114 cri.go:89] found id: ""
	I1209 05:55:56.287780 1437114 logs.go:282] 0 containers: []
	W1209 05:55:56.287788 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:55:56.287794 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:55:56.287851 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:55:56.311939 1437114 cri.go:89] found id: ""
	I1209 05:55:56.311966 1437114 logs.go:282] 0 containers: []
	W1209 05:55:56.311974 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:55:56.311981 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:55:56.312071 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:55:56.336812 1437114 cri.go:89] found id: ""
	I1209 05:55:56.336847 1437114 logs.go:282] 0 containers: []
	W1209 05:55:56.336856 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:55:56.336862 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:55:56.336926 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:55:56.364355 1437114 cri.go:89] found id: ""
	I1209 05:55:56.364378 1437114 logs.go:282] 0 containers: []
	W1209 05:55:56.364387 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:55:56.364394 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:55:56.364451 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:55:56.388220 1437114 cri.go:89] found id: ""
	I1209 05:55:56.388242 1437114 logs.go:282] 0 containers: []
	W1209 05:55:56.388251 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:55:56.388260 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:55:56.388272 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:55:56.451922 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:55:56.443739   11039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:56.444234   11039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:56.445911   11039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:56.446440   11039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:56.448091   11039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:55:56.451941 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:55:56.451955 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:55:56.477213 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:55:56.477256 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:55:56.504874 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:55:56.504908 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:55:56.561753 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:55:56.561793 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:55:59.078916 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:55:59.089470 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:55:59.089545 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:55:59.113298 1437114 cri.go:89] found id: ""
	I1209 05:55:59.113324 1437114 logs.go:282] 0 containers: []
	W1209 05:55:59.113332 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:55:59.113339 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:55:59.113402 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:55:59.141250 1437114 cri.go:89] found id: ""
	I1209 05:55:59.141278 1437114 logs.go:282] 0 containers: []
	W1209 05:55:59.141286 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:55:59.141292 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:55:59.141351 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:55:59.170020 1437114 cri.go:89] found id: ""
	I1209 05:55:59.170044 1437114 logs.go:282] 0 containers: []
	W1209 05:55:59.170052 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:55:59.170059 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:55:59.170122 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:55:59.210757 1437114 cri.go:89] found id: ""
	I1209 05:55:59.210792 1437114 logs.go:282] 0 containers: []
	W1209 05:55:59.210801 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:55:59.210808 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:55:59.210873 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:55:59.237433 1437114 cri.go:89] found id: ""
	I1209 05:55:59.237470 1437114 logs.go:282] 0 containers: []
	W1209 05:55:59.237479 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:55:59.237486 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:55:59.237551 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:55:59.263923 1437114 cri.go:89] found id: ""
	I1209 05:55:59.263959 1437114 logs.go:282] 0 containers: []
	W1209 05:55:59.263968 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:55:59.263975 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:55:59.264071 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:55:59.288850 1437114 cri.go:89] found id: ""
	I1209 05:55:59.288916 1437114 logs.go:282] 0 containers: []
	W1209 05:55:59.288940 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:55:59.288954 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:55:59.289029 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:55:59.316320 1437114 cri.go:89] found id: ""
	I1209 05:55:59.316347 1437114 logs.go:282] 0 containers: []
	W1209 05:55:59.316356 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:55:59.316365 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:55:59.316376 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:55:59.383644 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:55:59.373968   11152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:59.374816   11152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:59.376482   11152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:59.376830   11152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:59.378974   11152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:55:59.383666 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:55:59.383680 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:55:59.409556 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:55:59.409591 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:55:59.440707 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:55:59.440737 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:55:59.496851 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:55:59.496887 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:56:02.013397 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:56:02.023815 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:56:02.023883 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:56:02.054212 1437114 cri.go:89] found id: ""
	I1209 05:56:02.054240 1437114 logs.go:282] 0 containers: []
	W1209 05:56:02.054249 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:56:02.054255 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:56:02.054323 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:56:02.079245 1437114 cri.go:89] found id: ""
	I1209 05:56:02.079274 1437114 logs.go:282] 0 containers: []
	W1209 05:56:02.079283 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:56:02.079289 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:56:02.079347 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:56:02.104356 1437114 cri.go:89] found id: ""
	I1209 05:56:02.104399 1437114 logs.go:282] 0 containers: []
	W1209 05:56:02.104408 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:56:02.104415 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:56:02.104478 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:56:02.129688 1437114 cri.go:89] found id: ""
	I1209 05:56:02.129753 1437114 logs.go:282] 0 containers: []
	W1209 05:56:02.129777 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:56:02.129795 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:56:02.129886 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:56:02.159435 1437114 cri.go:89] found id: ""
	I1209 05:56:02.159463 1437114 logs.go:282] 0 containers: []
	W1209 05:56:02.159471 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:56:02.159478 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:56:02.159537 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:56:02.193945 1437114 cri.go:89] found id: ""
	I1209 05:56:02.193969 1437114 logs.go:282] 0 containers: []
	W1209 05:56:02.193987 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:56:02.193994 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:56:02.194093 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:56:02.230499 1437114 cri.go:89] found id: ""
	I1209 05:56:02.230528 1437114 logs.go:282] 0 containers: []
	W1209 05:56:02.230542 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:56:02.230549 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:56:02.230650 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:56:02.261955 1437114 cri.go:89] found id: ""
	I1209 05:56:02.262021 1437114 logs.go:282] 0 containers: []
	W1209 05:56:02.262046 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:56:02.262063 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:56:02.262075 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:56:02.278208 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:56:02.278245 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:56:02.342511 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:56:02.334138   11267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:02.334823   11267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:02.336452   11267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:02.337017   11267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:02.338543   11267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:56:02.342581 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:56:02.342603 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:56:02.367883 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:56:02.367920 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:56:02.398560 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:56:02.398587 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
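The pgrep line that opens each cycle ("sudo pgrep -xnf kube-apiserver.*minikube.*") is the cheap process-level check that runs before the per-container probes; the timestamps show it firing roughly every 2.5 to 3 seconds. A sketch of that wait loop, again with a hypothetical runSSH helper; pgrep exits non-zero when no process matches, which is what the error return stands for here:

    package sketch

    import (
    	"fmt"
    	"time"
    )

    // waitForAPIServerProcess polls pgrep until a kube-apiserver process
    // appears or the deadline passes.
    func waitForAPIServerProcess(runSSH func(string) (string, error), timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if _, err := runSSH(`sudo pgrep -xnf kube-apiserver.*minikube.*`); err == nil {
    			return nil // the process showed up
    		}
    		time.Sleep(2500 * time.Millisecond) // matches the cadence seen in the log
    	}
    	return fmt.Errorf("kube-apiserver process did not appear within %s", timeout)
    }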
	I1209 05:56:04.956142 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:56:04.966664 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:56:04.966728 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:56:05.000769 1437114 cri.go:89] found id: ""
	I1209 05:56:05.000792 1437114 logs.go:282] 0 containers: []
	W1209 05:56:05.000801 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:56:05.000807 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:56:05.000868 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:56:05.030686 1437114 cri.go:89] found id: ""
	I1209 05:56:05.030713 1437114 logs.go:282] 0 containers: []
	W1209 05:56:05.030726 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:56:05.030733 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:56:05.030792 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:56:05.055515 1437114 cri.go:89] found id: ""
	I1209 05:56:05.055541 1437114 logs.go:282] 0 containers: []
	W1209 05:56:05.055550 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:56:05.055556 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:56:05.055614 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:56:05.080557 1437114 cri.go:89] found id: ""
	I1209 05:56:05.080584 1437114 logs.go:282] 0 containers: []
	W1209 05:56:05.080593 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:56:05.080599 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:56:05.080659 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:56:05.106686 1437114 cri.go:89] found id: ""
	I1209 05:56:05.106714 1437114 logs.go:282] 0 containers: []
	W1209 05:56:05.106724 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:56:05.106731 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:56:05.106792 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:56:05.131985 1437114 cri.go:89] found id: ""
	I1209 05:56:05.132044 1437114 logs.go:282] 0 containers: []
	W1209 05:56:05.132053 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:56:05.132060 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:56:05.132127 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:56:05.158936 1437114 cri.go:89] found id: ""
	I1209 05:56:05.159002 1437114 logs.go:282] 0 containers: []
	W1209 05:56:05.159027 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:56:05.159045 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:56:05.159134 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:56:05.186586 1437114 cri.go:89] found id: ""
	I1209 05:56:05.186658 1437114 logs.go:282] 0 containers: []
	W1209 05:56:05.186682 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:56:05.186704 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:56:05.186745 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:56:05.252531 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:56:05.252568 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:56:05.268794 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:56:05.268823 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:56:05.330847 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:56:05.322209   11379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:05.322901   11379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:05.324496   11379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:05.325041   11379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:05.326643   11379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:56:05.330870 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:56:05.330882 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:56:05.356845 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:56:05.356877 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:56:07.894100 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:56:07.904726 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:56:07.904808 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:56:07.934685 1437114 cri.go:89] found id: ""
	I1209 05:56:07.934707 1437114 logs.go:282] 0 containers: []
	W1209 05:56:07.934715 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:56:07.934727 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:56:07.934786 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:56:07.966688 1437114 cri.go:89] found id: ""
	I1209 05:56:07.966715 1437114 logs.go:282] 0 containers: []
	W1209 05:56:07.966724 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:56:07.966730 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:56:07.966791 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:56:07.997688 1437114 cri.go:89] found id: ""
	I1209 05:56:07.997718 1437114 logs.go:282] 0 containers: []
	W1209 05:56:07.997727 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:56:07.997733 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:56:07.997794 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:56:08.028703 1437114 cri.go:89] found id: ""
	I1209 05:56:08.028738 1437114 logs.go:282] 0 containers: []
	W1209 05:56:08.028748 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:56:08.028756 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:56:08.028836 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:56:08.055186 1437114 cri.go:89] found id: ""
	I1209 05:56:08.055216 1437114 logs.go:282] 0 containers: []
	W1209 05:56:08.055225 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:56:08.055232 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:56:08.055298 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:56:08.081977 1437114 cri.go:89] found id: ""
	I1209 05:56:08.082005 1437114 logs.go:282] 0 containers: []
	W1209 05:56:08.082014 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:56:08.082020 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:56:08.082094 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:56:08.106694 1437114 cri.go:89] found id: ""
	I1209 05:56:08.106719 1437114 logs.go:282] 0 containers: []
	W1209 05:56:08.106728 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:56:08.106735 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:56:08.106794 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:56:08.131242 1437114 cri.go:89] found id: ""
	I1209 05:56:08.131266 1437114 logs.go:282] 0 containers: []
	W1209 05:56:08.131274 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:56:08.131284 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:56:08.131296 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:56:08.200236 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:56:08.191954   11490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:08.192809   11490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:08.194381   11490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:08.194676   11490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:08.196205   11490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:56:08.200261 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:56:08.200275 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:56:08.228642 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:56:08.228684 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:56:08.262181 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:56:08.262210 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:56:08.316796 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:56:08.316828 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
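Once every container probe comes back empty, the loop gathers the same four diagnostic sources each time: the kubelet and containerd journals, filtered dmesg, and a raw container listing. A sketch of that fan-out under the same hypothetical runSSH assumption; the command strings are taken verbatim from the log:

    package sketch

    // gatherLogs collects the fallback diagnostics shown in the cycles above,
    // keyed by the same labels the "Gathering logs for ..." lines use.
    func gatherLogs(runSSH func(string) (string, error)) map[string]string {
    	cmds := map[string]string{
    		"kubelet":          "sudo journalctl -u kubelet -n 400",
    		"containerd":       "sudo journalctl -u containerd -n 400",
    		"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
    		"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
    	}
    	logs := make(map[string]string)
    	for name, cmd := range cmds {
    		out, err := runSSH(cmd)
    		if err != nil {
    			out = "error: " + err.Error() // keep the failure in the report rather than aborting
    		}
    		logs[name] = out
    	}
    	return logs
    }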
	I1209 05:56:10.832826 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:56:10.843625 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:56:10.843696 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:56:10.867741 1437114 cri.go:89] found id: ""
	I1209 05:56:10.867808 1437114 logs.go:282] 0 containers: []
	W1209 05:56:10.867832 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:56:10.867854 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:56:10.867940 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:56:10.893251 1437114 cri.go:89] found id: ""
	I1209 05:56:10.893284 1437114 logs.go:282] 0 containers: []
	W1209 05:56:10.893292 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:56:10.893298 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:56:10.893357 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:56:10.921874 1437114 cri.go:89] found id: ""
	I1209 05:56:10.921897 1437114 logs.go:282] 0 containers: []
	W1209 05:56:10.921906 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:56:10.921912 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:56:10.921977 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:56:10.948235 1437114 cri.go:89] found id: ""
	I1209 05:56:10.948257 1437114 logs.go:282] 0 containers: []
	W1209 05:56:10.948272 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:56:10.948279 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:56:10.948337 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:56:10.977204 1437114 cri.go:89] found id: ""
	I1209 05:56:10.977226 1437114 logs.go:282] 0 containers: []
	W1209 05:56:10.977234 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:56:10.977239 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:56:10.977298 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:56:11.011653 1437114 cri.go:89] found id: ""
	I1209 05:56:11.011677 1437114 logs.go:282] 0 containers: []
	W1209 05:56:11.011685 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:56:11.011692 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:56:11.011753 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:56:11.038552 1437114 cri.go:89] found id: ""
	I1209 05:56:11.038575 1437114 logs.go:282] 0 containers: []
	W1209 05:56:11.038584 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:56:11.038589 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:56:11.038648 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:56:11.068058 1437114 cri.go:89] found id: ""
	I1209 05:56:11.068081 1437114 logs.go:282] 0 containers: []
	W1209 05:56:11.068089 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:56:11.068098 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:56:11.068109 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:56:11.124172 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:56:11.124208 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:56:11.140275 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:56:11.140316 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:56:11.220317 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:56:11.212396   11608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:11.213001   11608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:11.214543   11608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:11.215016   11608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:11.216494   11608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:56:11.212396   11608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:11.213001   11608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:11.214543   11608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:11.215016   11608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:11.216494   11608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:56:11.220349 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:56:11.220362 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:56:11.245629 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:56:11.245662 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:56:13.776003 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:56:13.786369 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:56:13.786448 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:56:13.809520 1437114 cri.go:89] found id: ""
	I1209 05:56:13.809544 1437114 logs.go:282] 0 containers: []
	W1209 05:56:13.809553 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:56:13.809559 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:56:13.809618 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:56:13.833347 1437114 cri.go:89] found id: ""
	I1209 05:56:13.833370 1437114 logs.go:282] 0 containers: []
	W1209 05:56:13.833378 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:56:13.833384 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:56:13.833446 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:56:13.857799 1437114 cri.go:89] found id: ""
	I1209 05:56:13.857830 1437114 logs.go:282] 0 containers: []
	W1209 05:56:13.857840 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:56:13.857846 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:56:13.857906 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:56:13.882625 1437114 cri.go:89] found id: ""
	I1209 05:56:13.882658 1437114 logs.go:282] 0 containers: []
	W1209 05:56:13.882667 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:56:13.882673 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:56:13.882742 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:56:13.910846 1437114 cri.go:89] found id: ""
	I1209 05:56:13.910880 1437114 logs.go:282] 0 containers: []
	W1209 05:56:13.910889 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:56:13.910895 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:56:13.910962 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:56:13.942418 1437114 cri.go:89] found id: ""
	I1209 05:56:13.942483 1437114 logs.go:282] 0 containers: []
	W1209 05:56:13.942510 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:56:13.942528 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:56:13.942615 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:56:13.972617 1437114 cri.go:89] found id: ""
	I1209 05:56:13.972686 1437114 logs.go:282] 0 containers: []
	W1209 05:56:13.972710 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:56:13.972728 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:56:13.972814 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:56:14.010643 1437114 cri.go:89] found id: ""
	I1209 05:56:14.010672 1437114 logs.go:282] 0 containers: []
	W1209 05:56:14.010690 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:56:14.010712 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:56:14.010743 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:56:14.045403 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:56:14.045489 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:56:14.103757 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:56:14.103793 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:56:14.119622 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:56:14.119648 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:56:14.199726 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:56:14.184447   11733 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:14.191243   11733 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:14.191781   11733 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:14.193355   11733 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:14.193912   11733 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:56:14.184447   11733 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:14.191243   11733 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:14.191781   11733 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:14.193355   11733 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:14.193912   11733 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:56:14.199794 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:56:14.199821 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:56:16.729940 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:56:16.740423 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:56:16.740497 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:56:16.765729 1437114 cri.go:89] found id: ""
	I1209 05:56:16.765755 1437114 logs.go:282] 0 containers: []
	W1209 05:56:16.765763 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:56:16.765770 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:56:16.765831 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:56:16.793724 1437114 cri.go:89] found id: ""
	I1209 05:56:16.793750 1437114 logs.go:282] 0 containers: []
	W1209 05:56:16.793759 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:56:16.793765 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:56:16.793824 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:56:16.821402 1437114 cri.go:89] found id: ""
	I1209 05:56:16.821429 1437114 logs.go:282] 0 containers: []
	W1209 05:56:16.821437 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:56:16.821444 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:56:16.821504 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:56:16.846074 1437114 cri.go:89] found id: ""
	I1209 05:56:16.846101 1437114 logs.go:282] 0 containers: []
	W1209 05:56:16.846110 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:56:16.846116 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:56:16.846175 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:56:16.870665 1437114 cri.go:89] found id: ""
	I1209 05:56:16.870689 1437114 logs.go:282] 0 containers: []
	W1209 05:56:16.870698 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:56:16.870705 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:56:16.870785 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:56:16.894509 1437114 cri.go:89] found id: ""
	I1209 05:56:16.894542 1437114 logs.go:282] 0 containers: []
	W1209 05:56:16.894550 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:56:16.894557 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:56:16.894651 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:56:16.921935 1437114 cri.go:89] found id: ""
	I1209 05:56:16.921962 1437114 logs.go:282] 0 containers: []
	W1209 05:56:16.921971 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:56:16.921977 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:56:16.922049 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:56:16.950536 1437114 cri.go:89] found id: ""
	I1209 05:56:16.950570 1437114 logs.go:282] 0 containers: []
	W1209 05:56:16.950579 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:56:16.950588 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:56:16.950599 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:56:17.008406 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:56:17.008442 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:56:17.024072 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:56:17.024098 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:56:17.089436 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:56:17.080816   11835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:17.081612   11835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:17.083298   11835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:17.083818   11835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:17.085479   11835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:56:17.080816   11835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:17.081612   11835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:17.083298   11835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:17.083818   11835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:17.085479   11835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:56:17.089456 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:56:17.089468 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:56:17.114751 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:56:17.114785 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:56:19.649189 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:56:19.659355 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:56:19.659709 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:56:19.687359 1437114 cri.go:89] found id: ""
	I1209 05:56:19.687393 1437114 logs.go:282] 0 containers: []
	W1209 05:56:19.687402 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:56:19.687408 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:56:19.687482 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:56:19.711167 1437114 cri.go:89] found id: ""
	I1209 05:56:19.711241 1437114 logs.go:282] 0 containers: []
	W1209 05:56:19.711264 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:56:19.711282 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:56:19.711361 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:56:19.734776 1437114 cri.go:89] found id: ""
	I1209 05:56:19.734843 1437114 logs.go:282] 0 containers: []
	W1209 05:56:19.734868 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:56:19.734886 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:56:19.734978 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:56:19.758075 1437114 cri.go:89] found id: ""
	I1209 05:56:19.758101 1437114 logs.go:282] 0 containers: []
	W1209 05:56:19.758111 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:56:19.758117 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:56:19.758191 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:56:19.781866 1437114 cri.go:89] found id: ""
	I1209 05:56:19.781889 1437114 logs.go:282] 0 containers: []
	W1209 05:56:19.781897 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:56:19.781903 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:56:19.782011 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:56:19.806779 1437114 cri.go:89] found id: ""
	I1209 05:56:19.806811 1437114 logs.go:282] 0 containers: []
	W1209 05:56:19.806820 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:56:19.806827 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:56:19.806896 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:56:19.830892 1437114 cri.go:89] found id: ""
	I1209 05:56:19.830931 1437114 logs.go:282] 0 containers: []
	W1209 05:56:19.830940 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:56:19.830946 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:56:19.831013 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:56:19.855119 1437114 cri.go:89] found id: ""
	I1209 05:56:19.855151 1437114 logs.go:282] 0 containers: []
	W1209 05:56:19.855160 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:56:19.855168 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:56:19.855180 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:56:19.918437 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:56:19.910860   11934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:19.911393   11934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:19.912883   11934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:19.913323   11934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:19.914743   11934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:56:19.910860   11934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:19.911393   11934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:19.912883   11934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:19.913323   11934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:19.914743   11934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:56:19.918456 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:56:19.918468 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:56:19.948986 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:56:19.949022 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:56:19.983513 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:56:19.983543 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:56:20.044570 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:56:20.044611 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:56:22.561138 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:56:22.571631 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:56:22.571701 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:56:22.597481 1437114 cri.go:89] found id: ""
	I1209 05:56:22.597507 1437114 logs.go:282] 0 containers: []
	W1209 05:56:22.597516 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:56:22.597522 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:56:22.597583 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:56:22.620910 1437114 cri.go:89] found id: ""
	I1209 05:56:22.620934 1437114 logs.go:282] 0 containers: []
	W1209 05:56:22.620942 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:56:22.620948 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:56:22.621010 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:56:22.645762 1437114 cri.go:89] found id: ""
	I1209 05:56:22.645786 1437114 logs.go:282] 0 containers: []
	W1209 05:56:22.645794 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:56:22.645802 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:56:22.645860 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:56:22.674030 1437114 cri.go:89] found id: ""
	I1209 05:56:22.674055 1437114 logs.go:282] 0 containers: []
	W1209 05:56:22.674063 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:56:22.674069 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:56:22.674129 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:56:22.697420 1437114 cri.go:89] found id: ""
	I1209 05:56:22.697483 1437114 logs.go:282] 0 containers: []
	W1209 05:56:22.697498 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:56:22.697505 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:56:22.697572 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:56:22.721275 1437114 cri.go:89] found id: ""
	I1209 05:56:22.721303 1437114 logs.go:282] 0 containers: []
	W1209 05:56:22.721311 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:56:22.721318 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:56:22.721375 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:56:22.751174 1437114 cri.go:89] found id: ""
	I1209 05:56:22.751207 1437114 logs.go:282] 0 containers: []
	W1209 05:56:22.751216 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:56:22.751223 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:56:22.751297 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:56:22.783334 1437114 cri.go:89] found id: ""
	I1209 05:56:22.783359 1437114 logs.go:282] 0 containers: []
	W1209 05:56:22.783368 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:56:22.783377 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:56:22.783388 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:56:22.798903 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:56:22.798931 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:56:22.863930 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:56:22.855168   12050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:22.855903   12050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:22.857473   12050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:22.858541   12050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:22.859308   12050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:56:22.855168   12050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:22.855903   12050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:22.857473   12050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:22.858541   12050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:22.859308   12050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:56:22.863951 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:56:22.863964 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:56:22.889010 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:56:22.889044 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:56:22.917472 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:56:22.917497 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:56:25.477751 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:56:25.488155 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:56:25.488227 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:56:25.513691 1437114 cri.go:89] found id: ""
	I1209 05:56:25.513726 1437114 logs.go:282] 0 containers: []
	W1209 05:56:25.513735 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:56:25.513742 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:56:25.513815 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:56:25.538394 1437114 cri.go:89] found id: ""
	I1209 05:56:25.538426 1437114 logs.go:282] 0 containers: []
	W1209 05:56:25.538434 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:56:25.538441 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:56:25.538507 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:56:25.565992 1437114 cri.go:89] found id: ""
	I1209 05:56:25.566014 1437114 logs.go:282] 0 containers: []
	W1209 05:56:25.566023 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:56:25.566028 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:56:25.566084 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:56:25.594238 1437114 cri.go:89] found id: ""
	I1209 05:56:25.594273 1437114 logs.go:282] 0 containers: []
	W1209 05:56:25.594283 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:56:25.594289 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:56:25.594357 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:56:25.618528 1437114 cri.go:89] found id: ""
	I1209 05:56:25.618554 1437114 logs.go:282] 0 containers: []
	W1209 05:56:25.618562 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:56:25.618569 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:56:25.618630 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:56:25.645761 1437114 cri.go:89] found id: ""
	I1209 05:56:25.645793 1437114 logs.go:282] 0 containers: []
	W1209 05:56:25.645802 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:56:25.645809 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:56:25.645868 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:56:25.673275 1437114 cri.go:89] found id: ""
	I1209 05:56:25.673303 1437114 logs.go:282] 0 containers: []
	W1209 05:56:25.673313 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:56:25.673320 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:56:25.673378 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:56:25.698776 1437114 cri.go:89] found id: ""
	I1209 05:56:25.698801 1437114 logs.go:282] 0 containers: []
	W1209 05:56:25.698810 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:56:25.698819 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:56:25.698831 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:56:25.758726 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:56:25.758763 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:56:25.774459 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:56:25.774498 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:56:25.837634 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:56:25.829791   12165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:25.830357   12165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:25.831894   12165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:25.832310   12165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:25.833747   12165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:56:25.829791   12165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:25.830357   12165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:25.831894   12165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:25.832310   12165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:25.833747   12165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:56:25.837654 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:56:25.837666 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:56:25.863059 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:56:25.863089 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:56:28.390209 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:56:28.400783 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:56:28.400858 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:56:28.431158 1437114 cri.go:89] found id: ""
	I1209 05:56:28.431186 1437114 logs.go:282] 0 containers: []
	W1209 05:56:28.431195 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:56:28.431201 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:56:28.431257 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:56:28.466252 1437114 cri.go:89] found id: ""
	I1209 05:56:28.466304 1437114 logs.go:282] 0 containers: []
	W1209 05:56:28.466313 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:56:28.466319 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:56:28.466387 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:56:28.495101 1437114 cri.go:89] found id: ""
	I1209 05:56:28.495128 1437114 logs.go:282] 0 containers: []
	W1209 05:56:28.495135 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:56:28.495141 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:56:28.495205 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:56:28.519814 1437114 cri.go:89] found id: ""
	I1209 05:56:28.519840 1437114 logs.go:282] 0 containers: []
	W1209 05:56:28.519848 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:56:28.519854 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:56:28.519917 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:56:28.545987 1437114 cri.go:89] found id: ""
	I1209 05:56:28.546014 1437114 logs.go:282] 0 containers: []
	W1209 05:56:28.546022 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:56:28.546029 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:56:28.546087 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:56:28.569653 1437114 cri.go:89] found id: ""
	I1209 05:56:28.569677 1437114 logs.go:282] 0 containers: []
	W1209 05:56:28.569686 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:56:28.569693 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:56:28.569750 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:56:28.592506 1437114 cri.go:89] found id: ""
	I1209 05:56:28.592531 1437114 logs.go:282] 0 containers: []
	W1209 05:56:28.592540 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:56:28.592546 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:56:28.592603 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:56:28.616082 1437114 cri.go:89] found id: ""
	I1209 05:56:28.616109 1437114 logs.go:282] 0 containers: []
	W1209 05:56:28.616118 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:56:28.616127 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:56:28.616140 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:56:28.641671 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:56:28.641702 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:56:28.667950 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:56:28.667976 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:56:28.723545 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:56:28.723579 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:56:28.739105 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:56:28.739133 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:56:28.799453 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:56:28.791383   12291 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:28.792152   12291 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:28.793337   12291 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:28.793895   12291 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:28.795399   12291 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:56:28.791383   12291 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:28.792152   12291 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:28.793337   12291 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:28.793895   12291 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:28.795399   12291 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:56:31.300174 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:56:31.310601 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:56:31.310671 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:56:31.335264 1437114 cri.go:89] found id: ""
	I1209 05:56:31.335286 1437114 logs.go:282] 0 containers: []
	W1209 05:56:31.335295 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:56:31.335301 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:56:31.335359 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:56:31.359354 1437114 cri.go:89] found id: ""
	I1209 05:56:31.359377 1437114 logs.go:282] 0 containers: []
	W1209 05:56:31.359386 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:56:31.359392 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:56:31.359451 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:56:31.385360 1437114 cri.go:89] found id: ""
	I1209 05:56:31.385383 1437114 logs.go:282] 0 containers: []
	W1209 05:56:31.385392 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:56:31.385398 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:56:31.385463 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:56:31.410224 1437114 cri.go:89] found id: ""
	I1209 05:56:31.410250 1437114 logs.go:282] 0 containers: []
	W1209 05:56:31.410258 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:56:31.410265 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:56:31.410359 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:56:31.451992 1437114 cri.go:89] found id: ""
	I1209 05:56:31.452040 1437114 logs.go:282] 0 containers: []
	W1209 05:56:31.452049 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:56:31.452056 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:56:31.452116 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:56:31.484950 1437114 cri.go:89] found id: ""
	I1209 05:56:31.484979 1437114 logs.go:282] 0 containers: []
	W1209 05:56:31.484987 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:56:31.484994 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:56:31.485052 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:56:31.518900 1437114 cri.go:89] found id: ""
	I1209 05:56:31.518929 1437114 logs.go:282] 0 containers: []
	W1209 05:56:31.518938 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:56:31.518944 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:56:31.519004 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:56:31.542368 1437114 cri.go:89] found id: ""
	I1209 05:56:31.542398 1437114 logs.go:282] 0 containers: []
	W1209 05:56:31.542406 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:56:31.542414 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:56:31.542426 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:56:31.597391 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:56:31.597426 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:56:31.613542 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:56:31.613568 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:56:31.675768 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:56:31.667793   12388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:31.668366   12388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:31.670085   12388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:31.670523   12388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:31.672049   12388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:56:31.667793   12388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:31.668366   12388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:31.670085   12388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:31.670523   12388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:31.672049   12388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:56:31.675790 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:56:31.675801 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:56:31.705823 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:56:31.705860 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
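The eight crictl probes repeated in the cycle above can be approximated with a short Go sketch. This is a minimal local reconstruction, not minikube's code: the helper name listCRIContainers is hypothetical, and minikube's real version (cri.go) runs the identical `sudo crictl ps -a --quiet --name=<name>` command over SSH via ssh_runner rather than locally.

// crictl_list.go — a minimal sketch of the container lookup this log repeats,
// assuming local shell access instead of minikube's ssh_runner.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listCRIContainers (hypothetical helper) returns the IDs of all containers,
// running or exited, whose name matches the filter — the same query as the
// log's `sudo crictl ps -a --quiet --name=<name>` invocations.
func listCRIContainers(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, fmt.Errorf("crictl ps failed: %w", err)
	}
	// --quiet prints one container ID per line; an empty result means
	// no matching container exists yet.
	return strings.Fields(strings.TrimSpace(string(out))), nil
}

func main() {
	// The same control-plane components the log probes, in the same order.
	for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard"} {
		ids, err := listCRIContainers(name)
		if err != nil {
			fmt.Printf("W listing %q: %v\n", name, err)
			continue
		}
		if len(ids) == 0 {
			// Matches the log's `No container was found matching "<name>"`.
			fmt.Printf("W No container was found matching %q\n", name)
			continue
		}
		fmt.Printf("I %d containers: %v\n", len(ids), ids)
	}
}

Every probe in this report returns an empty ID list, which is what pushes the loop into the log-gathering branch below.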
	I1209 05:56:34.233697 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:56:34.244491 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:56:34.244562 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:56:34.269357 1437114 cri.go:89] found id: ""
	I1209 05:56:34.269382 1437114 logs.go:282] 0 containers: []
	W1209 05:56:34.269393 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:56:34.269399 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:56:34.269455 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:56:34.298358 1437114 cri.go:89] found id: ""
	I1209 05:56:34.298389 1437114 logs.go:282] 0 containers: []
	W1209 05:56:34.298398 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:56:34.298404 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:56:34.298463 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:56:34.323280 1437114 cri.go:89] found id: ""
	I1209 05:56:34.323301 1437114 logs.go:282] 0 containers: []
	W1209 05:56:34.323309 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:56:34.323315 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:56:34.323372 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:56:34.347068 1437114 cri.go:89] found id: ""
	I1209 05:56:34.347144 1437114 logs.go:282] 0 containers: []
	W1209 05:56:34.347166 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:56:34.347185 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:56:34.347268 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:56:34.370494 1437114 cri.go:89] found id: ""
	I1209 05:56:34.370519 1437114 logs.go:282] 0 containers: []
	W1209 05:56:34.370528 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:56:34.370534 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:56:34.370593 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:56:34.394561 1437114 cri.go:89] found id: ""
	I1209 05:56:34.394586 1437114 logs.go:282] 0 containers: []
	W1209 05:56:34.394594 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:56:34.394601 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:56:34.394665 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:56:34.418680 1437114 cri.go:89] found id: ""
	I1209 05:56:34.418708 1437114 logs.go:282] 0 containers: []
	W1209 05:56:34.418717 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:56:34.418723 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:56:34.418781 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:56:34.456783 1437114 cri.go:89] found id: ""
	I1209 05:56:34.456811 1437114 logs.go:282] 0 containers: []
	W1209 05:56:34.456819 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:56:34.456828 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:56:34.456839 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:56:34.520119 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:56:34.520160 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:56:34.536245 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:56:34.536271 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:56:34.598782 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:56:34.590200   12497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:34.590688   12497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:34.592324   12497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:34.592957   12497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:34.593923   12497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:56:34.590200   12497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:34.590688   12497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:34.592324   12497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:34.592957   12497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:34.593923   12497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:56:34.598802 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:56:34.598813 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:56:34.623426 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:56:34.623456 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
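The "Gathering logs for ..." steps are plain shell commands wrapped in `/bin/bash -c`. A hedged sketch of that phase, assuming local execution (minikube pipes the same strings through ssh_runner to the node):

// gather_logs.go — a sketch of the log-gathering phase; the command strings
// are copied verbatim from this report.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Map iteration order is random in Go, which is harmless here: the
	// report itself gathers these sources in varying order across cycles.
	gathers := map[string]string{
		"kubelet":          `sudo journalctl -u kubelet -n 400`,
		"dmesg":            `sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`,
		"containerd":       `sudo journalctl -u containerd -n 400`,
		"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
	}
	for name, cmd := range gathers {
		fmt.Printf("Gathering logs for %s ...\n", name)
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			fmt.Printf("W gather %s failed: %v\n", name, err)
		}
		fmt.Print(string(out))
	}
}

The container-status command falls back from crictl to `docker ps -a`, so it still yields output on nodes where only one runtime CLI is installed.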
	I1209 05:56:37.156294 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:56:37.167303 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:56:37.167376 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:56:37.213639 1437114 cri.go:89] found id: ""
	I1209 05:56:37.213661 1437114 logs.go:282] 0 containers: []
	W1209 05:56:37.213670 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:56:37.213676 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:56:37.213734 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:56:37.251381 1437114 cri.go:89] found id: ""
	I1209 05:56:37.251451 1437114 logs.go:282] 0 containers: []
	W1209 05:56:37.251472 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:56:37.251489 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:56:37.251577 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:56:37.276652 1437114 cri.go:89] found id: ""
	I1209 05:56:37.276683 1437114 logs.go:282] 0 containers: []
	W1209 05:56:37.276718 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:56:37.276730 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:56:37.276807 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:56:37.306291 1437114 cri.go:89] found id: ""
	I1209 05:56:37.306355 1437114 logs.go:282] 0 containers: []
	W1209 05:56:37.306378 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:56:37.306397 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:56:37.306480 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:56:37.330690 1437114 cri.go:89] found id: ""
	I1209 05:56:37.330761 1437114 logs.go:282] 0 containers: []
	W1209 05:56:37.330784 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:56:37.330803 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:56:37.330891 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:56:37.360974 1437114 cri.go:89] found id: ""
	I1209 05:56:37.360996 1437114 logs.go:282] 0 containers: []
	W1209 05:56:37.361005 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:56:37.361011 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:56:37.361067 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:56:37.385070 1437114 cri.go:89] found id: ""
	I1209 05:56:37.385134 1437114 logs.go:282] 0 containers: []
	W1209 05:56:37.385149 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:56:37.385157 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:56:37.385214 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:56:37.408838 1437114 cri.go:89] found id: ""
	I1209 05:56:37.408872 1437114 logs.go:282] 0 containers: []
	W1209 05:56:37.408881 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:56:37.408890 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:56:37.408904 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:56:37.470471 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:56:37.470552 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:56:37.490560 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:56:37.490636 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:56:37.566595 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:56:37.558458   12607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:37.559098   12607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:37.560793   12607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:37.561249   12607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:37.562761   12607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:56:37.558458   12607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:37.559098   12607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:37.560793   12607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:37.561249   12607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:37.562761   12607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:56:37.566616 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:56:37.566629 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:56:37.591926 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:56:37.591966 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
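The recurring "failed describe nodes" blocks show the same stderr twice because the gatherer logs both the command error (which embeds the output) and the captured output itself between ** stderr ** markers. A minimal sketch of that step, assuming a plain local kubectl rather than the version-pinned binary under /var/lib/minikube/binaries:

// describe_nodes.go — a sketch of the failing "describe nodes" gather step.
package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("/bin/bash", "-c",
		`sudo kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig`)
	var stdout, stderr bytes.Buffer
	cmd.Stdout = &stdout
	cmd.Stderr = &stderr
	if err := cmd.Run(); err != nil {
		// With no apiserver listening on :8443, kubectl exits with status 1;
		// logging both err and stderr.String() reproduces the doubled
		// stderr seen in each failure block of this report.
		fmt.Printf("W failed describe nodes: %v\nstdout:\n%s\nstderr:\n%s\n",
			err, stdout.String(), stderr.String())
		return
	}
	fmt.Print(stdout.String())
}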
	I1209 05:56:40.120818 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:56:40.132357 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:56:40.132434 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:56:40.159057 1437114 cri.go:89] found id: ""
	I1209 05:56:40.159127 1437114 logs.go:282] 0 containers: []
	W1209 05:56:40.159150 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:56:40.159172 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:56:40.159260 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:56:40.194739 1437114 cri.go:89] found id: ""
	I1209 05:56:40.194762 1437114 logs.go:282] 0 containers: []
	W1209 05:56:40.194770 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:56:40.194777 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:56:40.194842 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:56:40.229613 1437114 cri.go:89] found id: ""
	I1209 05:56:40.229642 1437114 logs.go:282] 0 containers: []
	W1209 05:56:40.229651 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:56:40.229657 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:56:40.229720 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:56:40.266599 1437114 cri.go:89] found id: ""
	I1209 05:56:40.266622 1437114 logs.go:282] 0 containers: []
	W1209 05:56:40.266631 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:56:40.266643 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:56:40.266705 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:56:40.293941 1437114 cri.go:89] found id: ""
	I1209 05:56:40.293964 1437114 logs.go:282] 0 containers: []
	W1209 05:56:40.293973 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:56:40.293979 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:56:40.294037 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:56:40.319374 1437114 cri.go:89] found id: ""
	I1209 05:56:40.319407 1437114 logs.go:282] 0 containers: []
	W1209 05:56:40.319416 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:56:40.319423 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:56:40.319497 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:56:40.344221 1437114 cri.go:89] found id: ""
	I1209 05:56:40.344254 1437114 logs.go:282] 0 containers: []
	W1209 05:56:40.344263 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:56:40.344268 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:56:40.344333 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:56:40.369033 1437114 cri.go:89] found id: ""
	I1209 05:56:40.369056 1437114 logs.go:282] 0 containers: []
	W1209 05:56:40.369066 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:56:40.369076 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:56:40.369088 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:56:40.398480 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:56:40.398506 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:56:40.454913 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:56:40.454992 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:56:40.471549 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:56:40.471617 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:56:40.537419 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:56:40.529111   12732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:40.529745   12732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:40.531419   12732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:40.532052   12732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:40.533493   12732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:56:40.529111   12732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:40.529745   12732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:40.531419   12732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:40.532052   12732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:40.533493   12732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:56:40.537440 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:56:40.537452 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
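The timestamps (05:56:31, :34, :37, :40, ...) show a roughly 3-second polling loop around the `sudo pgrep -xnf kube-apiserver.*minikube.*` probe. A sketch of that loop, using a plain sleep and a hypothetical 6-minute deadline; minikube's actual wait logic and timeout live in its bootstrapper, not here:

// wait_apiserver.go — a sketch of the apiserver polling loop visible in
// this report's timestamps.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// apiserverRunning mirrors the log's pgrep probe: pgrep exits 0 only if a
// matching kube-apiserver process exists, so Run() == nil means "found".
func apiserverRunning() bool {
	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
}

func main() {
	deadline := time.Now().Add(6 * time.Minute) // hypothetical timeout
	for time.Now().Before(deadline) {
		if apiserverRunning() {
			fmt.Println("kube-apiserver process found")
			return
		}
		// Each miss triggers another container-listing and log-gathering
		// cycle like the ones repeated throughout this report.
		time.Sleep(3 * time.Second)
	}
	fmt.Println("timed out waiting for kube-apiserver")
}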
	I1209 05:56:43.063560 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:56:43.074056 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:56:43.074128 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:56:43.098443 1437114 cri.go:89] found id: ""
	I1209 05:56:43.098467 1437114 logs.go:282] 0 containers: []
	W1209 05:56:43.098476 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:56:43.098483 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:56:43.098543 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:56:43.123378 1437114 cri.go:89] found id: ""
	I1209 05:56:43.123405 1437114 logs.go:282] 0 containers: []
	W1209 05:56:43.123414 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:56:43.123420 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:56:43.123483 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:56:43.152283 1437114 cri.go:89] found id: ""
	I1209 05:56:43.152313 1437114 logs.go:282] 0 containers: []
	W1209 05:56:43.152322 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:56:43.152329 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:56:43.152389 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:56:43.176720 1437114 cri.go:89] found id: ""
	I1209 05:56:43.176744 1437114 logs.go:282] 0 containers: []
	W1209 05:56:43.176752 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:56:43.176759 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:56:43.176816 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:56:43.202038 1437114 cri.go:89] found id: ""
	I1209 05:56:43.202066 1437114 logs.go:282] 0 containers: []
	W1209 05:56:43.202074 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:56:43.202081 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:56:43.202136 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:56:43.231595 1437114 cri.go:89] found id: ""
	I1209 05:56:43.231620 1437114 logs.go:282] 0 containers: []
	W1209 05:56:43.231629 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:56:43.231636 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:56:43.231693 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:56:43.261330 1437114 cri.go:89] found id: ""
	I1209 05:56:43.261351 1437114 logs.go:282] 0 containers: []
	W1209 05:56:43.261359 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:56:43.261365 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:56:43.261422 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:56:43.290154 1437114 cri.go:89] found id: ""
	I1209 05:56:43.290175 1437114 logs.go:282] 0 containers: []
	W1209 05:56:43.290183 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:56:43.290192 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:56:43.290204 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:56:43.318398 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:56:43.318424 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:56:43.377076 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:56:43.377112 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:56:43.392846 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:56:43.392877 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:56:43.468351 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:56:43.457690   12848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:43.458459   12848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:43.460248   12848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:43.460927   12848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:43.462463   12848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:56:43.457690   12848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:43.458459   12848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:43.460248   12848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:43.460927   12848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:43.462463   12848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:56:43.468373 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:56:43.468384 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:56:46.000301 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:56:46.013622 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:56:46.013695 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:56:46.043040 1437114 cri.go:89] found id: ""
	I1209 05:56:46.043066 1437114 logs.go:282] 0 containers: []
	W1209 05:56:46.043074 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:56:46.043081 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:56:46.043164 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:56:46.073486 1437114 cri.go:89] found id: ""
	I1209 05:56:46.073512 1437114 logs.go:282] 0 containers: []
	W1209 05:56:46.073521 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:56:46.073529 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:56:46.073593 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:56:46.099148 1437114 cri.go:89] found id: ""
	I1209 05:56:46.099175 1437114 logs.go:282] 0 containers: []
	W1209 05:56:46.099185 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:56:46.099193 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:56:46.099252 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:56:46.123167 1437114 cri.go:89] found id: ""
	I1209 05:56:46.123191 1437114 logs.go:282] 0 containers: []
	W1209 05:56:46.123200 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:56:46.123207 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:56:46.123271 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:56:46.151973 1437114 cri.go:89] found id: ""
	I1209 05:56:46.151999 1437114 logs.go:282] 0 containers: []
	W1209 05:56:46.152008 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:56:46.152035 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:56:46.152098 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:56:46.177766 1437114 cri.go:89] found id: ""
	I1209 05:56:46.177798 1437114 logs.go:282] 0 containers: []
	W1209 05:56:46.177807 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:56:46.177813 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:56:46.177871 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:56:46.206986 1437114 cri.go:89] found id: ""
	I1209 05:56:46.207008 1437114 logs.go:282] 0 containers: []
	W1209 05:56:46.207017 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:56:46.207023 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:56:46.207081 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:56:46.233946 1437114 cri.go:89] found id: ""
	I1209 05:56:46.233968 1437114 logs.go:282] 0 containers: []
	W1209 05:56:46.233977 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:56:46.233986 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:56:46.233997 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:56:46.298127 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:56:46.289387   12939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:46.289949   12939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:46.291474   12939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:46.292041   12939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:46.293829   12939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:56:46.289387   12939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:46.289949   12939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:46.291474   12939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:46.292041   12939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:46.293829   12939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:56:46.298150 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:56:46.298162 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:56:46.323208 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:56:46.323239 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:56:46.355077 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:56:46.355106 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:56:46.410415 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:56:46.410452 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:56:48.926721 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:56:48.937257 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:56:48.937332 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:56:48.961648 1437114 cri.go:89] found id: ""
	I1209 05:56:48.961676 1437114 logs.go:282] 0 containers: []
	W1209 05:56:48.961685 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:56:48.961698 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:56:48.961758 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:56:48.989144 1437114 cri.go:89] found id: ""
	I1209 05:56:48.989169 1437114 logs.go:282] 0 containers: []
	W1209 05:56:48.989178 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:56:48.989184 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:56:48.989240 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:56:49.014588 1437114 cri.go:89] found id: ""
	I1209 05:56:49.014613 1437114 logs.go:282] 0 containers: []
	W1209 05:56:49.014622 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:56:49.014628 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:56:49.014691 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:56:49.038311 1437114 cri.go:89] found id: ""
	I1209 05:56:49.038339 1437114 logs.go:282] 0 containers: []
	W1209 05:56:49.038349 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:56:49.038355 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:56:49.038414 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:56:49.062714 1437114 cri.go:89] found id: ""
	I1209 05:56:49.062740 1437114 logs.go:282] 0 containers: []
	W1209 05:56:49.062748 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:56:49.062754 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:56:49.062814 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:56:49.089769 1437114 cri.go:89] found id: ""
	I1209 05:56:49.089798 1437114 logs.go:282] 0 containers: []
	W1209 05:56:49.089807 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:56:49.089815 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:56:49.089892 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:56:49.118456 1437114 cri.go:89] found id: ""
	I1209 05:56:49.118477 1437114 logs.go:282] 0 containers: []
	W1209 05:56:49.118486 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:56:49.118492 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:56:49.118548 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:56:49.146213 1437114 cri.go:89] found id: ""
	I1209 05:56:49.146241 1437114 logs.go:282] 0 containers: []
	W1209 05:56:49.146260 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:56:49.146286 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:56:49.146304 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:56:49.171755 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:56:49.171792 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:56:49.210632 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:56:49.210700 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:56:49.274853 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:56:49.274890 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:56:49.290746 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:56:49.290774 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:56:49.352595 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:56:49.344509   13068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:49.345192   13068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:49.346929   13068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:49.347389   13068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:49.348821   13068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:56:49.344509   13068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:49.345192   13068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:49.346929   13068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:49.347389   13068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:49.348821   13068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
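The stderr blocks all fail at the TCP layer: nothing is listening on localhost:8443, so every kubectl API call is refused before any TLS or HTTP exchange. That client-side symptom can be reproduced with a two-line dial probe:

// probe_8443.go — reproduces the "connect: connection refused" in the
// stderr blocks above by dialing the apiserver port directly.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		// e.g. dial tcp [::1]:8443: connect: connection refused
		fmt.Println("probe failed:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on :8443")
}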
	I1209 05:56:51.854276 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:56:51.864787 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:56:51.864868 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:56:51.888399 1437114 cri.go:89] found id: ""
	I1209 05:56:51.888422 1437114 logs.go:282] 0 containers: []
	W1209 05:56:51.888431 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:56:51.888437 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:56:51.888499 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:56:51.913838 1437114 cri.go:89] found id: ""
	I1209 05:56:51.913865 1437114 logs.go:282] 0 containers: []
	W1209 05:56:51.913873 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:56:51.913880 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:56:51.913961 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:56:51.938727 1437114 cri.go:89] found id: ""
	I1209 05:56:51.938768 1437114 logs.go:282] 0 containers: []
	W1209 05:56:51.938794 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:56:51.938811 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:56:51.938885 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:56:51.964549 1437114 cri.go:89] found id: ""
	I1209 05:56:51.964576 1437114 logs.go:282] 0 containers: []
	W1209 05:56:51.964584 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:56:51.964590 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:56:51.964689 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:56:51.988777 1437114 cri.go:89] found id: ""
	I1209 05:56:51.988806 1437114 logs.go:282] 0 containers: []
	W1209 05:56:51.988815 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:56:51.988821 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:56:51.988908 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:56:52.017110 1437114 cri.go:89] found id: ""
	I1209 05:56:52.017138 1437114 logs.go:282] 0 containers: []
	W1209 05:56:52.017147 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:56:52.017154 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:56:52.017219 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:56:52.043184 1437114 cri.go:89] found id: ""
	I1209 05:56:52.043211 1437114 logs.go:282] 0 containers: []
	W1209 05:56:52.043219 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:56:52.043225 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:56:52.043293 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:56:52.068591 1437114 cri.go:89] found id: ""
	I1209 05:56:52.068617 1437114 logs.go:282] 0 containers: []
	W1209 05:56:52.068626 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:56:52.068636 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:56:52.068652 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:56:52.135805 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:56:52.127242   13157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:52.127996   13157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:52.129698   13157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:52.130086   13157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:52.131645   13157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:56:52.127242   13157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:52.127996   13157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:52.129698   13157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:52.130086   13157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:52.131645   13157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:56:52.135824 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:56:52.135837 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:56:52.160848 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:56:52.160884 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:56:52.206902 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:56:52.206930 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:56:52.269206 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:56:52.269242 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:56:54.786534 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:56:54.796870 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:56:54.796942 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:56:54.820891 1437114 cri.go:89] found id: ""
	I1209 05:56:54.820912 1437114 logs.go:282] 0 containers: []
	W1209 05:56:54.820920 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:56:54.820926 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:56:54.820983 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:56:54.844219 1437114 cri.go:89] found id: ""
	I1209 05:56:54.844243 1437114 logs.go:282] 0 containers: []
	W1209 05:56:54.844251 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:56:54.844257 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:56:54.844314 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:56:54.867467 1437114 cri.go:89] found id: ""
	I1209 05:56:54.867540 1437114 logs.go:282] 0 containers: []
	W1209 05:56:54.867564 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:56:54.867585 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:56:54.867678 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:56:54.891985 1437114 cri.go:89] found id: ""
	I1209 05:56:54.892007 1437114 logs.go:282] 0 containers: []
	W1209 05:56:54.892053 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:56:54.892060 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:56:54.892135 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:56:54.915079 1437114 cri.go:89] found id: ""
	I1209 05:56:54.915104 1437114 logs.go:282] 0 containers: []
	W1209 05:56:54.915112 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:56:54.915119 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:56:54.915175 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:56:54.941729 1437114 cri.go:89] found id: ""
	I1209 05:56:54.941768 1437114 logs.go:282] 0 containers: []
	W1209 05:56:54.941776 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:56:54.941783 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:56:54.941840 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:56:54.970033 1437114 cri.go:89] found id: ""
	I1209 05:56:54.970058 1437114 logs.go:282] 0 containers: []
	W1209 05:56:54.970066 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:56:54.970072 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:56:54.970134 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:56:55.004188 1437114 cri.go:89] found id: ""
	I1209 05:56:55.004230 1437114 logs.go:282] 0 containers: []
	W1209 05:56:55.004240 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:56:55.004250 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:56:55.004264 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:56:55.034996 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:56:55.035025 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:56:55.091574 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:56:55.091610 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:56:55.108302 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:56:55.108331 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:56:55.172944 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:56:55.163616   13287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:55.164399   13287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:55.166155   13287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:55.166466   13287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:55.168546   13287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:56:55.163616   13287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:55.164399   13287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:55.166155   13287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:55.166466   13287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:55.168546   13287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:56:55.172964 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:56:55.172985 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:56:57.700005 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:56:57.714279 1437114 out.go:203] 
	W1209 05:56:57.717113 1437114 out.go:285] X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1209 05:56:57.717154 1437114 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	* Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1209 05:56:57.717169 1437114 out.go:285] * Related issues:
	* Related issues:
	W1209 05:56:57.717186 1437114 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	  - https://github.com/kubernetes/minikube/issues/4536
	W1209 05:56:57.717204 1437114 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	  - https://github.com/kubernetes/minikube/issues/6014
	I1209 05:56:57.720208 1437114 out.go:203] 

** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p newest-cni-262540 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0": exit status 105
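The root cause in the trace above is K8S_APISERVER_MISSING: minikube polled for 6m0s and never saw a kube-apiserver process inside the node, which is also why every kubectl probe against localhost:8443 was refused. A minimal manual triage, assuming the profile name newest-cni-262540 from this run, reuses the same probes the log shows minikube issuing:

    # Look for a kube-apiserver process inside the node container
    minikube ssh -p newest-cni-262540 -- sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    # List all CRI containers (running or exited) named kube-apiserver
    minikube ssh -p newest-cni-262540 -- sudo crictl ps -a --name=kube-apiserver
    # The kubelet journal usually records why the static pod never launched
    minikube ssh -p newest-cni-262540 -- sudo journalctl -u kubelet -n 100 --no-pager

If crictl reports an exited container ID, sudo crictl logs <id> on the node is the next step; in this run it found none at all.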
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/SecondStart]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/SecondStart]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-262540
helpers_test.go:243: (dbg) docker inspect newest-cni-262540:

-- stdout --
	[
	    {
	        "Id": "ed3de5d59c962edb9b6bef3201cdaec7fe174aa520c616b7fefbc3014b60f5d7",
	        "Created": "2025-12-09T05:40:46.656747886Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1437242,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-09T05:50:48.635687357Z",
	            "FinishedAt": "2025-12-09T05:50:47.310180166Z"
	        },
	        "Image": "sha256:e4eb91ed18a24161fce60c7cdd660144ecd5b8c5029dc2dea2c5e423c2f48ce4",
	        "ResolvConfPath": "/var/lib/docker/containers/ed3de5d59c962edb9b6bef3201cdaec7fe174aa520c616b7fefbc3014b60f5d7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ed3de5d59c962edb9b6bef3201cdaec7fe174aa520c616b7fefbc3014b60f5d7/hostname",
	        "HostsPath": "/var/lib/docker/containers/ed3de5d59c962edb9b6bef3201cdaec7fe174aa520c616b7fefbc3014b60f5d7/hosts",
	        "LogPath": "/var/lib/docker/containers/ed3de5d59c962edb9b6bef3201cdaec7fe174aa520c616b7fefbc3014b60f5d7/ed3de5d59c962edb9b6bef3201cdaec7fe174aa520c616b7fefbc3014b60f5d7-json.log",
	        "Name": "/newest-cni-262540",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-262540:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-262540",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ed3de5d59c962edb9b6bef3201cdaec7fe174aa520c616b7fefbc3014b60f5d7",
	                "LowerDir": "/var/lib/docker/overlay2/6415c7c451ba9f1315edabee60b733d482ef79cadbd27911dea3809654b6c1ec-init/diff:/var/lib/docker/overlay2/c44bb57aa59cc265266f37f2bb6e7ec0e7d641c3b4aeaa57e6d23deec6f0d1d4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6415c7c451ba9f1315edabee60b733d482ef79cadbd27911dea3809654b6c1ec/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6415c7c451ba9f1315edabee60b733d482ef79cadbd27911dea3809654b6c1ec/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6415c7c451ba9f1315edabee60b733d482ef79cadbd27911dea3809654b6c1ec/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-262540",
	                "Source": "/var/lib/docker/volumes/newest-cni-262540/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-262540",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-262540",
	                "name.minikube.sigs.k8s.io": "newest-cni-262540",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5ef6b7780104cfde91a86dd0f42d780a7d42fd9d965a232761225f3bafa31a2e",
	            "SandboxKey": "/var/run/docker/netns/5ef6b7780104",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34215"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34216"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34219"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34217"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34218"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-262540": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "92:d2:57:f6:4e:32",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "aa89e26051ba524ceb1352e47e7602df84b3dfd74bbc435c72069a1036fceebf",
	                    "EndpointID": "79808c0b2bead60a0d6333b887aa13d7b302f422db688969b287245b73727791",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-262540",
	                        "ed3de5d59c96"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
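The inspect dump above confirms the container itself is healthy: State.Status is running, and 8443/tcp (the apiserver port) is published on 127.0.0.1:34218. When only a field or two matters, a Go template trims the output; a short sketch against the same container:

    # Container state, then the host port mapped to the apiserver port
    docker inspect -f '{{.State.Status}}' newest-cni-262540
    docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' newest-cni-262540

This is the same template pattern minikube itself uses to resolve the SSH port in the Last Start log below.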
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-262540 -n newest-cni-262540
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-262540 -n newest-cni-262540: exit status 2 (324.640836ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
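minikube status encodes component health in the exit code's bits (host, cluster, Kubernetes), so exit status 2 alongside a Running host is exactly the failure signature here: the docker container is up while the cluster inside it is not. Querying the other status fields makes the split explicit; a sketch against the same profile, using fields the status template exposes:

    minikube status -p newest-cni-262540 \
      --format='host:{{.Host}} kubelet:{{.Kubelet}} apiserver:{{.APIServer}}'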
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/SecondStart]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-262540 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-262540 logs -n 25: (1.740712873s)
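The -n 25 flag caps each gathered log source at its last 25 lines, so the excerpts below are deliberately short. For a failure like this one, rerunning without the cap, or writing everything to a file, preserves the full history; a sketch:

    # Collect complete logs for offline inspection
    minikube logs -p newest-cni-262540 --file=/tmp/newest-cni-262540.log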
helpers_test.go:260: TestStartStop/group/newest-cni/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                            ARGS                                                                                                                            │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p embed-certs-432108 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-432108           │ jenkins │ v1.37.0 │ 09 Dec 25 05:36 UTC │ 09 Dec 25 05:37 UTC │
	│ image   │ embed-certs-432108 image list --format=json                                                                                                                                                                                                                │ embed-certs-432108           │ jenkins │ v1.37.0 │ 09 Dec 25 05:37 UTC │ 09 Dec 25 05:37 UTC │
	│ pause   │ -p embed-certs-432108 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-432108           │ jenkins │ v1.37.0 │ 09 Dec 25 05:37 UTC │ 09 Dec 25 05:37 UTC │
	│ unpause │ -p embed-certs-432108 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-432108           │ jenkins │ v1.37.0 │ 09 Dec 25 05:37 UTC │ 09 Dec 25 05:37 UTC │
	│ delete  │ -p embed-certs-432108                                                                                                                                                                                                                                      │ embed-certs-432108           │ jenkins │ v1.37.0 │ 09 Dec 25 05:37 UTC │ 09 Dec 25 05:37 UTC │
	│ delete  │ -p embed-certs-432108                                                                                                                                                                                                                                      │ embed-certs-432108           │ jenkins │ v1.37.0 │ 09 Dec 25 05:37 UTC │ 09 Dec 25 05:37 UTC │
	│ start   │ -p default-k8s-diff-port-564611 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-564611 │ jenkins │ v1.37.0 │ 09 Dec 25 05:37 UTC │ 09 Dec 25 05:39 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-564611 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ default-k8s-diff-port-564611 │ jenkins │ v1.37.0 │ 09 Dec 25 05:39 UTC │ 09 Dec 25 05:39 UTC │
	│ stop    │ -p default-k8s-diff-port-564611 --alsologtostderr -v=3                                                                                                                                                                                                     │ default-k8s-diff-port-564611 │ jenkins │ v1.37.0 │ 09 Dec 25 05:39 UTC │ 09 Dec 25 05:39 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-564611 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ default-k8s-diff-port-564611 │ jenkins │ v1.37.0 │ 09 Dec 25 05:39 UTC │ 09 Dec 25 05:39 UTC │
	│ start   │ -p default-k8s-diff-port-564611 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-564611 │ jenkins │ v1.37.0 │ 09 Dec 25 05:39 UTC │ 09 Dec 25 05:40 UTC │
	│ image   │ default-k8s-diff-port-564611 image list --format=json                                                                                                                                                                                                      │ default-k8s-diff-port-564611 │ jenkins │ v1.37.0 │ 09 Dec 25 05:40 UTC │ 09 Dec 25 05:40 UTC │
	│ pause   │ -p default-k8s-diff-port-564611 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-564611 │ jenkins │ v1.37.0 │ 09 Dec 25 05:40 UTC │ 09 Dec 25 05:40 UTC │
	│ unpause │ -p default-k8s-diff-port-564611 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-564611 │ jenkins │ v1.37.0 │ 09 Dec 25 05:40 UTC │ 09 Dec 25 05:40 UTC │
	│ delete  │ -p default-k8s-diff-port-564611                                                                                                                                                                                                                            │ default-k8s-diff-port-564611 │ jenkins │ v1.37.0 │ 09 Dec 25 05:40 UTC │ 09 Dec 25 05:40 UTC │
	│ delete  │ -p default-k8s-diff-port-564611                                                                                                                                                                                                                            │ default-k8s-diff-port-564611 │ jenkins │ v1.37.0 │ 09 Dec 25 05:40 UTC │ 09 Dec 25 05:40 UTC │
	│ start   │ -p newest-cni-262540 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-262540            │ jenkins │ v1.37.0 │ 09 Dec 25 05:40 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-842269 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ no-preload-842269            │ jenkins │ v1.37.0 │ 09 Dec 25 05:43 UTC │                     │
	│ stop    │ -p no-preload-842269 --alsologtostderr -v=3                                                                                                                                                                                                                │ no-preload-842269            │ jenkins │ v1.37.0 │ 09 Dec 25 05:45 UTC │ 09 Dec 25 05:45 UTC │
	│ addons  │ enable dashboard -p no-preload-842269 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                               │ no-preload-842269            │ jenkins │ v1.37.0 │ 09 Dec 25 05:45 UTC │ 09 Dec 25 05:45 UTC │
	│ start   │ -p no-preload-842269 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-842269            │ jenkins │ v1.37.0 │ 09 Dec 25 05:45 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-262540 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ newest-cni-262540            │ jenkins │ v1.37.0 │ 09 Dec 25 05:49 UTC │                     │
	│ stop    │ -p newest-cni-262540 --alsologtostderr -v=3                                                                                                                                                                                                                │ newest-cni-262540            │ jenkins │ v1.37.0 │ 09 Dec 25 05:50 UTC │ 09 Dec 25 05:50 UTC │
	│ addons  │ enable dashboard -p newest-cni-262540 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                               │ newest-cni-262540            │ jenkins │ v1.37.0 │ 09 Dec 25 05:50 UTC │ 09 Dec 25 05:50 UTC │
	│ start   │ -p newest-cni-262540 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-262540            │ jenkins │ v1.37.0 │ 09 Dec 25 05:50 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/09 05:50:48
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
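Decoding one line against that format: in I1209 05:50:48.368732 1437114 out.go:360], the leading I is the severity (I/W/E/F for info, warning, error, fatal), 1209 is the date (Dec 09), 05:50:48.368732 the timestamp, 1437114 the emitting process/thread id, and out.go:360 the source file and line.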
	I1209 05:50:48.368732 1437114 out.go:360] Setting OutFile to fd 1 ...
	I1209 05:50:48.368913 1437114 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 05:50:48.368940 1437114 out.go:374] Setting ErrFile to fd 2...
	I1209 05:50:48.368958 1437114 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 05:50:48.369216 1437114 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-1142328/.minikube/bin
	I1209 05:50:48.369601 1437114 out.go:368] Setting JSON to false
	I1209 05:50:48.370536 1437114 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":30772,"bootTime":1765228677,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1209 05:50:48.370622 1437114 start.go:143] virtualization:  
	I1209 05:50:48.373806 1437114 out.go:179] * [newest-cni-262540] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1209 05:50:48.377517 1437114 out.go:179]   - MINIKUBE_LOCATION=22081
	I1209 05:50:48.377579 1437114 notify.go:221] Checking for updates...
	I1209 05:50:48.383314 1437114 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 05:50:48.386284 1437114 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22081-1142328/kubeconfig
	I1209 05:50:48.389132 1437114 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-1142328/.minikube
	I1209 05:50:48.392076 1437114 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1209 05:50:48.394975 1437114 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 05:50:48.398361 1437114 config.go:182] Loaded profile config "newest-cni-262540": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1209 05:50:48.398977 1437114 driver.go:422] Setting default libvirt URI to qemu:///system
	I1209 05:50:48.429565 1437114 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1209 05:50:48.429674 1437114 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 05:50:48.493190 1437114 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-09 05:50:48.483865172 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1209 05:50:48.493298 1437114 docker.go:319] overlay module found
	I1209 05:50:48.496461 1437114 out.go:179] * Using the docker driver based on existing profile
	I1209 05:50:48.499256 1437114 start.go:309] selected driver: docker
	I1209 05:50:48.499276 1437114 start.go:927] validating driver "docker" against &{Name:newest-cni-262540 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-262540 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 05:50:48.499393 1437114 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 05:50:48.500188 1437114 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 05:50:48.552839 1437114 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-09 05:50:48.544121972 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1209 05:50:48.553181 1437114 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1209 05:50:48.553214 1437114 cni.go:84] Creating CNI manager for ""
	I1209 05:50:48.553271 1437114 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1209 05:50:48.553312 1437114 start.go:353] cluster config:
	{Name:newest-cni-262540 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-262540 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 05:50:48.558270 1437114 out.go:179] * Starting "newest-cni-262540" primary control-plane node in "newest-cni-262540" cluster
	I1209 05:50:48.560987 1437114 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1209 05:50:48.563913 1437114 out.go:179] * Pulling base image v0.0.48-1765184860-22066 ...
	I1209 05:50:48.566628 1437114 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1209 05:50:48.566677 1437114 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1209 05:50:48.566701 1437114 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local docker daemon
	I1209 05:50:48.566709 1437114 cache.go:65] Caching tarball of preloaded images
	I1209 05:50:48.566793 1437114 preload.go:238] Found /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1209 05:50:48.566803 1437114 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1209 05:50:48.566914 1437114 profile.go:143] Saving config to /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/config.json ...
	I1209 05:50:48.585366 1437114 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local docker daemon, skipping pull
	I1209 05:50:48.585390 1437114 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c exists in daemon, skipping load
	I1209 05:50:48.585410 1437114 cache.go:243] Successfully downloaded all kic artifacts
	I1209 05:50:48.585447 1437114 start.go:360] acquireMachinesLock for newest-cni-262540: {Name:mk272d84ff1bc8c8949f2f0b1f608a7519899d10 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 05:50:48.585504 1437114 start.go:364] duration metric: took 35.806µs to acquireMachinesLock for "newest-cni-262540"
	I1209 05:50:48.585529 1437114 start.go:96] Skipping create...Using existing machine configuration
	I1209 05:50:48.585539 1437114 fix.go:54] fixHost starting: 
	I1209 05:50:48.585799 1437114 cli_runner.go:164] Run: docker container inspect newest-cni-262540 --format={{.State.Status}}
	I1209 05:50:48.601614 1437114 fix.go:112] recreateIfNeeded on newest-cni-262540: state=Stopped err=<nil>
	W1209 05:50:48.601645 1437114 fix.go:138] unexpected machine state, will restart: <nil>
	W1209 05:50:45.187180 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:50:47.684513 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	I1209 05:50:48.604910 1437114 out.go:252] * Restarting existing docker container for "newest-cni-262540" ...
	I1209 05:50:48.604997 1437114 cli_runner.go:164] Run: docker start newest-cni-262540
	I1209 05:50:48.871934 1437114 cli_runner.go:164] Run: docker container inspect newest-cni-262540 --format={{.State.Status}}
	I1209 05:50:48.896820 1437114 kic.go:430] container "newest-cni-262540" state is running.
	I1209 05:50:48.898586 1437114 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-262540
	I1209 05:50:48.919622 1437114 profile.go:143] Saving config to /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/config.json ...
	I1209 05:50:48.919952 1437114 machine.go:94] provisionDockerMachine start ...
	I1209 05:50:48.920090 1437114 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-262540
	I1209 05:50:48.944382 1437114 main.go:143] libmachine: Using SSH client type: native
	I1209 05:50:48.944721 1437114 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 34215 <nil> <nil>}
	I1209 05:50:48.944730 1437114 main.go:143] libmachine: About to run SSH command:
	hostname
	I1209 05:50:48.945423 1437114 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:54144->127.0.0.1:34215: read: connection reset by peer
	I1209 05:50:52.103931 1437114 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-262540
	
	I1209 05:50:52.103958 1437114 ubuntu.go:182] provisioning hostname "newest-cni-262540"
	I1209 05:50:52.104072 1437114 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-262540
	I1209 05:50:52.121462 1437114 main.go:143] libmachine: Using SSH client type: native
	I1209 05:50:52.121778 1437114 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 34215 <nil> <nil>}
	I1209 05:50:52.121795 1437114 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-262540 && echo "newest-cni-262540" | sudo tee /etc/hostname
	I1209 05:50:52.280621 1437114 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-262540
	
	I1209 05:50:52.280705 1437114 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-262540
	I1209 05:50:52.301681 1437114 main.go:143] libmachine: Using SSH client type: native
	I1209 05:50:52.301997 1437114 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 34215 <nil> <nil>}
	I1209 05:50:52.302019 1437114 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-262540' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-262540/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-262540' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 05:50:52.452274 1437114 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1209 05:50:52.452304 1437114 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22081-1142328/.minikube CaCertPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22081-1142328/.minikube}
	I1209 05:50:52.452324 1437114 ubuntu.go:190] setting up certificates
	I1209 05:50:52.452332 1437114 provision.go:84] configureAuth start
	I1209 05:50:52.452391 1437114 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-262540
	I1209 05:50:52.475825 1437114 provision.go:143] copyHostCerts
	I1209 05:50:52.475907 1437114 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.pem, removing ...
	I1209 05:50:52.475921 1437114 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.pem
	I1209 05:50:52.475999 1437114 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.pem (1078 bytes)
	I1209 05:50:52.476136 1437114 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-1142328/.minikube/cert.pem, removing ...
	I1209 05:50:52.476147 1437114 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-1142328/.minikube/cert.pem
	I1209 05:50:52.476175 1437114 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22081-1142328/.minikube/cert.pem (1123 bytes)
	I1209 05:50:52.476288 1437114 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-1142328/.minikube/key.pem, removing ...
	I1209 05:50:52.476322 1437114 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-1142328/.minikube/key.pem
	I1209 05:50:52.476364 1437114 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22081-1142328/.minikube/key.pem (1675 bytes)
	I1209 05:50:52.476440 1437114 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca-key.pem org=jenkins.newest-cni-262540 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-262540]
	I1209 05:50:52.561012 1437114 provision.go:177] copyRemoteCerts
	I1209 05:50:52.561084 1437114 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 05:50:52.561133 1437114 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-262540
	I1209 05:50:52.578674 1437114 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34215 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/newest-cni-262540/id_rsa Username:docker}
	I1209 05:50:52.685758 1437114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1209 05:50:52.702408 1437114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1209 05:50:52.719173 1437114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1671 bytes)
	I1209 05:50:52.736435 1437114 provision.go:87] duration metric: took 284.081054ms to configureAuth
	I1209 05:50:52.736462 1437114 ubuntu.go:206] setting minikube options for container-runtime
	I1209 05:50:52.736672 1437114 config.go:182] Loaded profile config "newest-cni-262540": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1209 05:50:52.736698 1437114 machine.go:97] duration metric: took 3.816733312s to provisionDockerMachine
	I1209 05:50:52.736707 1437114 start.go:293] postStartSetup for "newest-cni-262540" (driver="docker")
	I1209 05:50:52.736719 1437114 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 05:50:52.736771 1437114 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 05:50:52.736819 1437114 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-262540
	I1209 05:50:52.753733 1437114 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34215 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/newest-cni-262540/id_rsa Username:docker}
	I1209 05:50:52.859644 1437114 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 05:50:52.862806 1437114 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1209 05:50:52.862830 1437114 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1209 05:50:52.862841 1437114 filesync.go:126] Scanning /home/jenkins/minikube-integration/22081-1142328/.minikube/addons for local assets ...
	I1209 05:50:52.862893 1437114 filesync.go:126] Scanning /home/jenkins/minikube-integration/22081-1142328/.minikube/files for local assets ...
	I1209 05:50:52.862974 1437114 filesync.go:149] local asset: /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/ssl/certs/11442312.pem -> 11442312.pem in /etc/ssl/certs
	I1209 05:50:52.863076 1437114 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 05:50:52.870063 1437114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/ssl/certs/11442312.pem --> /etc/ssl/certs/11442312.pem (1708 bytes)
	I1209 05:50:52.886852 1437114 start.go:296] duration metric: took 150.129481ms for postStartSetup
	I1209 05:50:52.886932 1437114 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1209 05:50:52.887020 1437114 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-262540
	I1209 05:50:52.904086 1437114 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34215 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/newest-cni-262540/id_rsa Username:docker}
	I1209 05:50:53.006063 1437114 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1209 05:50:53.011716 1437114 fix.go:56] duration metric: took 4.426170276s for fixHost
	I1209 05:50:53.011745 1437114 start.go:83] releasing machines lock for "newest-cni-262540", held for 4.426228294s
	I1209 05:50:53.011812 1437114 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-262540
	I1209 05:50:53.028468 1437114 ssh_runner.go:195] Run: cat /version.json
	I1209 05:50:53.028532 1437114 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-262540
	I1209 05:50:53.028815 1437114 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 05:50:53.028886 1437114 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-262540
	I1209 05:50:53.050698 1437114 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34215 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/newest-cni-262540/id_rsa Username:docker}
	I1209 05:50:53.061651 1437114 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34215 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/newest-cni-262540/id_rsa Username:docker}
	I1209 05:50:53.151708 1437114 ssh_runner.go:195] Run: systemctl --version
	I1209 05:50:53.249572 1437114 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 05:50:53.254184 1437114 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 05:50:53.254256 1437114 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 05:50:53.261725 1437114 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
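
The two steps above show how minikube sidelines any pre-existing bridge/podman CNI configs without deleting them: matching files under /etc/cni/net.d are renamed with a .mk_disabled suffix so the CNI it recommends later (kindnet, below) is the only active config. A minimal sketch of the same rename pattern (the loop is illustrative, not minikube's exact code; paths from the log):

	# Disable (rename) bridge/podman CNI configs non-destructively,
	# mirroring the find/-exec mv command in the log above.
	for f in /etc/cni/net.d/*bridge* /etc/cni/net.d/*podman*; do
	  [ -e "$f" ] || continue                  # skip if the glob matched nothing
	  case "$f" in *.mk_disabled) continue;; esac
	  sudo mv "$f" "$f.mk_disabled"            # reversible: mv back to re-enable
	done
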
	I1209 05:50:53.261749 1437114 start.go:496] detecting cgroup driver to use...
	I1209 05:50:53.261780 1437114 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1209 05:50:53.261828 1437114 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1209 05:50:53.278531 1437114 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1209 05:50:53.291190 1437114 docker.go:218] disabling cri-docker service (if available) ...
	I1209 05:50:53.291252 1437114 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 05:50:53.306525 1437114 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 05:50:53.319477 1437114 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 05:50:53.424347 1437114 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 05:50:53.539911 1437114 docker.go:234] disabling docker service ...
	I1209 05:50:53.540005 1437114 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 05:50:53.555506 1437114 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 05:50:53.568379 1437114 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 05:50:53.684143 1437114 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 05:50:53.819865 1437114 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 05:50:53.834400 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 05:50:53.848555 1437114 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1209 05:50:53.857346 1437114 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1209 05:50:53.866232 1437114 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1209 05:50:53.866362 1437114 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1209 05:50:53.875141 1437114 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1209 05:50:53.883775 1437114 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1209 05:50:53.892743 1437114 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1209 05:50:53.901606 1437114 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 05:50:53.909694 1437114 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1209 05:50:53.918469 1437114 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1209 05:50:53.927272 1437114 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1209 05:50:53.939275 1437114 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 05:50:53.948029 1437114 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 05:50:53.956257 1437114 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 05:50:54.075166 1437114 ssh_runner.go:195] Run: sudo systemctl restart containerd
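
The sed edits above (SystemdCgroup = false, runc.v2 runtime, conf_dir, enable_unprivileged_ports) keep containerd aligned with the "cgroupfs" driver detected on the host; the kubelet is configured to match via cgroupDriver: cgroupfs in the KubeletConfiguration further down. A quick way to confirm the edit took effect after the restart (a hypothetical spot check, not part of minikube's flow):

	grep -n 'SystemdCgroup' /etc/containerd/config.toml   # expect: SystemdCgroup = false
	systemctl is-active containerd                        # expect: active
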
	I1209 05:50:54.195479 1437114 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1209 05:50:54.195546 1437114 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1209 05:50:54.199412 1437114 start.go:564] Will wait 60s for crictl version
	I1209 05:50:54.199478 1437114 ssh_runner.go:195] Run: which crictl
	I1209 05:50:54.203349 1437114 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1209 05:50:54.229036 1437114 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1209 05:50:54.229147 1437114 ssh_runner.go:195] Run: containerd --version
	I1209 05:50:54.257755 1437114 ssh_runner.go:195] Run: containerd --version
	I1209 05:50:54.281890 1437114 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	W1209 05:50:50.184270 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:50:52.684275 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	I1209 05:50:54.284780 1437114 cli_runner.go:164] Run: docker network inspect newest-cni-262540 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1209 05:50:54.300458 1437114 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1209 05:50:54.304227 1437114 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
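
The bash one-liner above is an idempotent /etc/hosts update: it strips any previous host.minikube.internal entry, appends the current one, and swaps the file into place with sudo cp (going through a temp file because the shell's own redirection would run unprivileged). The same pattern is reused for control-plane.minikube.internal below. Expanded for readability, same commands as the log:

	# Rewrite /etc/hosts with exactly one host.minikube.internal entry.
	{ grep -v $'\thost.minikube.internal$' /etc/hosts
	  echo $'192.168.76.1\thost.minikube.internal'
	} > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts
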
	I1209 05:50:54.316829 1437114 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1209 05:50:54.319602 1437114 kubeadm.go:884] updating cluster {Name:newest-cni-262540 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-262540 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 05:50:54.319761 1437114 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1209 05:50:54.319850 1437114 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 05:50:54.344882 1437114 containerd.go:627] all images are preloaded for containerd runtime.
	I1209 05:50:54.344907 1437114 containerd.go:534] Images already preloaded, skipping extraction
	I1209 05:50:54.344969 1437114 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 05:50:54.368351 1437114 containerd.go:627] all images are preloaded for containerd runtime.
	I1209 05:50:54.368375 1437114 cache_images.go:86] Images are preloaded, skipping loading
	I1209 05:50:54.368384 1437114 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1209 05:50:54.368487 1437114 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-262540 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-262540 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1209 05:50:54.368554 1437114 ssh_runner.go:195] Run: sudo crictl info
	I1209 05:50:54.396480 1437114 cni.go:84] Creating CNI manager for ""
	I1209 05:50:54.396505 1437114 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1209 05:50:54.396527 1437114 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1209 05:50:54.396551 1437114 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-262540 NodeName:newest-cni-262540 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1209 05:50:54.396668 1437114 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-262540"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
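
The generated file is four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) written to /var/tmp/minikube/kubeadm.yaml.new (2235 bytes, scp'd below). Since the matching k8s binaries are already on the node, the rendered config could be sanity-checked before use; this assumes `kubeadm config validate` is available in this kubeadm build (it exists in recent releases):

	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new
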
	
	I1209 05:50:54.396755 1437114 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1209 05:50:54.404357 1437114 binaries.go:51] Found k8s binaries, skipping transfer
	I1209 05:50:54.404462 1437114 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1209 05:50:54.411829 1437114 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1209 05:50:54.423915 1437114 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1209 05:50:54.436484 1437114 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
	I1209 05:50:54.448905 1437114 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1209 05:50:54.452398 1437114 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 05:50:54.461840 1437114 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 05:50:54.574379 1437114 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 05:50:54.590263 1437114 certs.go:69] Setting up /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540 for IP: 192.168.76.2
	I1209 05:50:54.590332 1437114 certs.go:195] generating shared ca certs ...
	I1209 05:50:54.590364 1437114 certs.go:227] acquiring lock for ca certs: {Name:mk15788702f8c4e23b5aeab3f44961d296fab259 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 05:50:54.590561 1437114 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.key
	I1209 05:50:54.590652 1437114 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/proxy-client-ca.key
	I1209 05:50:54.590688 1437114 certs.go:257] generating profile certs ...
	I1209 05:50:54.590838 1437114 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/client.key
	I1209 05:50:54.590942 1437114 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/apiserver.key.0ed49b31
	I1209 05:50:54.591051 1437114 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/proxy-client.key
	I1209 05:50:54.591210 1437114 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/1144231.pem (1338 bytes)
	W1209 05:50:54.591287 1437114 certs.go:480] ignoring /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/1144231_empty.pem, impossibly tiny 0 bytes
	I1209 05:50:54.591314 1437114 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca-key.pem (1679 bytes)
	I1209 05:50:54.591380 1437114 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem (1078 bytes)
	I1209 05:50:54.591442 1437114 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/cert.pem (1123 bytes)
	I1209 05:50:54.591490 1437114 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/key.pem (1675 bytes)
	I1209 05:50:54.591576 1437114 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/ssl/certs/11442312.pem (1708 bytes)
	I1209 05:50:54.592436 1437114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 05:50:54.617399 1437114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1209 05:50:54.636943 1437114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 05:50:54.658494 1437114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1209 05:50:54.674958 1437114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1209 05:50:54.701134 1437114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1209 05:50:54.720347 1437114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 05:50:54.738904 1437114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1209 05:50:54.758253 1437114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/1144231.pem --> /usr/share/ca-certificates/1144231.pem (1338 bytes)
	I1209 05:50:54.775204 1437114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/ssl/certs/11442312.pem --> /usr/share/ca-certificates/11442312.pem (1708 bytes)
	I1209 05:50:54.791963 1437114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 05:50:54.809403 1437114 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 05:50:54.821958 1437114 ssh_runner.go:195] Run: openssl version
	I1209 05:50:54.828113 1437114 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1209 05:50:54.835305 1437114 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1209 05:50:54.842458 1437114 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 05:50:54.846155 1437114 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 04:09 /usr/share/ca-certificates/minikubeCA.pem
	I1209 05:50:54.846222 1437114 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 05:50:54.887330 1437114 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1209 05:50:54.894630 1437114 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1144231.pem
	I1209 05:50:54.901722 1437114 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1144231.pem /etc/ssl/certs/1144231.pem
	I1209 05:50:54.909025 1437114 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1144231.pem
	I1209 05:50:54.912514 1437114 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 04:18 /usr/share/ca-certificates/1144231.pem
	I1209 05:50:54.912621 1437114 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1144231.pem
	I1209 05:50:54.953649 1437114 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1209 05:50:54.960781 1437114 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/11442312.pem
	I1209 05:50:54.967822 1437114 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/11442312.pem /etc/ssl/certs/11442312.pem
	I1209 05:50:54.975177 1437114 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11442312.pem
	I1209 05:50:54.978699 1437114 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 04:18 /usr/share/ca-certificates/11442312.pem
	I1209 05:50:54.978782 1437114 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11442312.pem
	I1209 05:50:55.020640 1437114 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
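
The three openssl/ln sequences above implement OpenSSL's hashed-directory lookup: each CA under /usr/share/ca-certificates gets a symlink in /etc/ssl/certs named <subject-hash>.0 (b5213941.0, 51391683.0, 3ec20f2e.0 in this run), which is how TLS clients on the node locate minikube's CAs. The pattern in shell form, simplified to the hash link (paths from the log):

	cert=/usr/share/ca-certificates/minikubeCA.pem
	h=$(openssl x509 -hash -noout -in "$cert")   # e.g. b5213941
	sudo ln -fs "$cert" "/etc/ssl/certs/${h}.0"
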
	I1209 05:50:55.034989 1437114 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 05:50:55.043885 1437114 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1209 05:50:55.090059 1437114 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1209 05:50:55.134954 1437114 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1209 05:50:55.180095 1437114 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1209 05:50:55.223090 1437114 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1209 05:50:55.265103 1437114 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
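
The -checkend 86400 runs above exit non-zero if a certificate expires within the next 24 hours (86400 seconds), which is what would trigger cert regeneration; all six control-plane client/server certs pass here. For example:

	openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
	  -checkend 86400 && echo "valid for >=24h" || echo "expiring soon, regenerate"
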
	I1209 05:50:55.306238 1437114 kubeadm.go:401] StartCluster: {Name:newest-cni-262540 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-262540 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 05:50:55.306348 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1209 05:50:55.306413 1437114 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 05:50:55.335032 1437114 cri.go:89] found id: ""
	I1209 05:50:55.335115 1437114 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1209 05:50:55.355619 1437114 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1209 05:50:55.355640 1437114 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1209 05:50:55.355691 1437114 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1209 05:50:55.363844 1437114 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1209 05:50:55.364433 1437114 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-262540" does not appear in /home/jenkins/minikube-integration/22081-1142328/kubeconfig
	I1209 05:50:55.364754 1437114 kubeconfig.go:62] /home/jenkins/minikube-integration/22081-1142328/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-262540" cluster setting kubeconfig missing "newest-cni-262540" context setting]
	I1209 05:50:55.365251 1437114 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1142328/kubeconfig: {Name:mk0dc127429f88dc4fdfb2d110deebc58207c1b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 05:50:55.366765 1437114 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1209 05:50:55.375221 1437114 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1209 05:50:55.375252 1437114 kubeadm.go:602] duration metric: took 19.605753ms to restartPrimaryControlPlane
	I1209 05:50:55.375261 1437114 kubeadm.go:403] duration metric: took 69.033781ms to StartCluster
	I1209 05:50:55.375276 1437114 settings.go:142] acquiring lock: {Name:mk8fa744e3d74bf8a1cbf5ac275c9f1969ad91a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 05:50:55.375345 1437114 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22081-1142328/kubeconfig
	I1209 05:50:55.376265 1437114 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1142328/kubeconfig: {Name:mk0dc127429f88dc4fdfb2d110deebc58207c1b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 05:50:55.376705 1437114 config.go:182] Loaded profile config "newest-cni-262540": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1209 05:50:55.376504 1437114 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1209 05:50:55.376810 1437114 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1209 05:50:55.377093 1437114 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-262540"
	I1209 05:50:55.377111 1437114 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-262540"
	I1209 05:50:55.377136 1437114 host.go:66] Checking if "newest-cni-262540" exists ...
	I1209 05:50:55.377594 1437114 cli_runner.go:164] Run: docker container inspect newest-cni-262540 --format={{.State.Status}}
	I1209 05:50:55.377785 1437114 addons.go:70] Setting dashboard=true in profile "newest-cni-262540"
	I1209 05:50:55.377813 1437114 addons.go:239] Setting addon dashboard=true in "newest-cni-262540"
	W1209 05:50:55.377825 1437114 addons.go:248] addon dashboard should already be in state true
	I1209 05:50:55.377849 1437114 host.go:66] Checking if "newest-cni-262540" exists ...
	I1209 05:50:55.378304 1437114 cli_runner.go:164] Run: docker container inspect newest-cni-262540 --format={{.State.Status}}
	I1209 05:50:55.378820 1437114 addons.go:70] Setting default-storageclass=true in profile "newest-cni-262540"
	I1209 05:50:55.378864 1437114 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-262540"
	I1209 05:50:55.379212 1437114 cli_runner.go:164] Run: docker container inspect newest-cni-262540 --format={{.State.Status}}
	I1209 05:50:55.381896 1437114 out.go:179] * Verifying Kubernetes components...
	I1209 05:50:55.388614 1437114 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 05:50:55.438264 1437114 addons.go:239] Setting addon default-storageclass=true in "newest-cni-262540"
	I1209 05:50:55.438303 1437114 host.go:66] Checking if "newest-cni-262540" exists ...
	I1209 05:50:55.438728 1437114 cli_runner.go:164] Run: docker container inspect newest-cni-262540 --format={{.State.Status}}
	I1209 05:50:55.440785 1437114 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 05:50:55.442715 1437114 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 05:50:55.442743 1437114 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1209 05:50:55.442806 1437114 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-262540
	I1209 05:50:55.442947 1437114 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1209 05:50:55.445621 1437114 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1209 05:50:55.449877 1437114 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1209 05:50:55.449904 1437114 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1209 05:50:55.449976 1437114 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-262540
	I1209 05:50:55.481759 1437114 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34215 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/newest-cni-262540/id_rsa Username:docker}
	I1209 05:50:55.496417 1437114 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1209 05:50:55.496440 1437114 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1209 05:50:55.496499 1437114 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-262540
	I1209 05:50:55.515362 1437114 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34215 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/newest-cni-262540/id_rsa Username:docker}
	I1209 05:50:55.537402 1437114 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34215 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/newest-cni-262540/id_rsa Username:docker}
	I1209 05:50:55.642792 1437114 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 05:50:55.677774 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 05:50:55.711653 1437114 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1209 05:50:55.711691 1437114 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1209 05:50:55.713691 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1209 05:50:55.771340 1437114 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1209 05:50:55.771368 1437114 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1209 05:50:55.785331 1437114 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1209 05:50:55.785403 1437114 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1209 05:50:55.798961 1437114 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1209 05:50:55.798984 1437114 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1209 05:50:55.811558 1437114 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1209 05:50:55.811625 1437114 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1209 05:50:55.824010 1437114 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1209 05:50:55.824113 1437114 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1209 05:50:55.836722 1437114 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1209 05:50:55.836745 1437114 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1209 05:50:55.849061 1437114 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1209 05:50:55.849126 1437114 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1209 05:50:55.862091 1437114 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1209 05:50:55.862114 1437114 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1209 05:50:55.875010 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1209 05:50:56.435552 1437114 api_server.go:52] waiting for apiserver process to appear ...
	W1209 05:50:56.435748 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:50:56.435801 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:50:56.435838 1437114 retry.go:31] will retry after 228.095144ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 05:50:56.435700 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:50:56.435898 1437114 retry.go:31] will retry after 361.053359ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 05:50:56.436142 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:50:56.436189 1437114 retry.go:31] will retry after 212.683869ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:50:56.649580 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1209 05:50:56.665010 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1209 05:50:56.729564 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:50:56.729662 1437114 retry.go:31] will retry after 263.201205ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 05:50:56.751560 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:50:56.751590 1437114 retry.go:31] will retry after 282.08987ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:50:56.797828 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1209 05:50:56.855489 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:50:56.855525 1437114 retry.go:31] will retry after 519.882573ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:50:56.936655 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:50:56.993111 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1209 05:50:57.034512 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1209 05:50:57.059780 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:50:57.059861 1437114 retry.go:31] will retry after 724.517068ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 05:50:57.095702 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:50:57.095733 1437114 retry.go:31] will retry after 773.591416ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:50:57.376312 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1209 05:50:57.435557 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:50:57.435589 1437114 retry.go:31] will retry after 453.196958ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:50:57.436773 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:50:57.784620 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1209 05:50:57.844755 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:50:57.844791 1437114 retry.go:31] will retry after 1.262011023s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:50:57.869923 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 05:50:57.889536 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 05:50:57.936212 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1209 05:50:57.961431 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:50:57.961468 1437114 retry.go:31] will retry after 546.501311ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 05:50:58.032466 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:50:58.032501 1437114 retry.go:31] will retry after 1.229436669s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 05:50:54.684397 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:50:57.184110 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:50:59.184561 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	I1209 05:50:58.436310 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:50:58.508935 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1209 05:50:58.565163 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:50:58.565196 1437114 retry.go:31] will retry after 1.407912766s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:50:58.936676 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:50:59.107417 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1209 05:50:59.166291 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:50:59.166364 1437114 retry.go:31] will retry after 928.374807ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:50:59.262572 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1209 05:50:59.321942 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:50:59.321975 1437114 retry.go:31] will retry after 837.961471ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:50:59.436172 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:50:59.936839 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
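	[editor's note] Interleaved with the applies, the runner polls sudo pgrep -xnf kube-apiserver.*minikube.* roughly every half second (05:50:58.436, 05:50:58.936, 05:50:59.436, ...) to detect a kube-apiserver process coming back. pgrep exits non-zero when nothing matches, which makes it a cheap liveness probe. A sketch of that poll, with waitForAPIServer as a hypothetical name:

	package main

	import (
		"context"
		"fmt"
		"os/exec"
		"time"
	)

	// waitForAPIServer polls pgrep (the pattern is the one from the log)
	// until a matching process exists or ctx expires.
	func waitForAPIServer(ctx context.Context) error {
		ticker := time.NewTicker(500 * time.Millisecond)
		defer ticker.Stop()
		for {
			if err := exec.CommandContext(ctx, "sudo", "pgrep", "-xnf",
				"kube-apiserver.*minikube.*").Run(); err == nil {
				return nil // a matching process exists
			}
			select {
			case <-ctx.Done():
				return ctx.Err()
			case <-ticker.C:
			}
		}
	}

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
		defer cancel()
		fmt.Println(waitForAPIServer(ctx))
	}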
	I1209 05:50:59.973278 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 05:51:00.094961 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1209 05:51:00.122388 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:00.122508 1437114 retry.go:31] will retry after 2.37581771s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:00.163516 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1209 05:51:00.369038 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:00.369122 1437114 retry.go:31] will retry after 1.02409357s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 05:51:00.430845 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:00.430881 1437114 retry.go:31] will retry after 1.008529781s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:00.435975 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:00.935928 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:01.393811 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1209 05:51:01.436520 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:01.440060 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1209 05:51:01.479948 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:01.480008 1437114 retry.go:31] will retry after 3.887040249s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 05:51:01.521362 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:01.521394 1437114 retry.go:31] will retry after 2.488257731s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:01.936891 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:02.436059 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:02.499505 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1209 05:51:02.558807 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:02.558839 1437114 retry.go:31] will retry after 1.68559081s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:02.936227 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1209 05:51:01.683581 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:51:04.183570 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
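
The W-level lines tagged with PID 1429857 come from a second minikube run (the no-preload profile) whose output is interleaved with the addon retries above; it is polling the Ready condition of node "no-preload-842269" against 192.168.85.2:8443 and getting the same refused connection. A rough hand-run equivalent of that wait, as a sketch only (the 6m timeout matches the deadline reported later in this log; minikube does not actually shell out to kubectl wait):

    # assumes a kubeconfig pointing at https://192.168.85.2:8443
    kubectl wait node/no-preload-842269 --for=condition=Ready --timeout=6m
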
	I1209 05:51:03.436252 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:03.936492 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:04.009914 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1209 05:51:04.068567 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:04.068604 1437114 retry.go:31] will retry after 3.558332748s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:04.244680 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1209 05:51:04.309239 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:04.309330 1437114 retry.go:31] will retry after 5.213787505s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:04.436559 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:04.936651 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:05.367810 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1209 05:51:05.433548 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:05.433586 1437114 retry.go:31] will retry after 5.477878375s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:05.436872 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:05.936073 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:06.436593 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:06.936543 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:07.436871 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
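
Between apply attempts, this run polls for a live apiserver process about every 500 ms (inferred from the timestamps) by running pgrep over SSH: -x requires an exact match, -n picks the newest matching process, and -f matches against the full command line. A minimal shell sketch of the same loop, with the interval assumed rather than taken from minikube's source:

    # repeat until a kube-apiserver process for this profile appears
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*'; do
      sleep 0.5
    done
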
	I1209 05:51:07.628150 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1209 05:51:07.690629 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:07.690661 1437114 retry.go:31] will retry after 6.157660473s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:07.935908 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1209 05:51:06.183630 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:51:08.683544 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	I1209 05:51:08.436122 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:08.935959 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:09.436970 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:09.523671 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1209 05:51:09.581839 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:09.581914 1437114 retry.go:31] will retry after 9.601279523s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:09.936233 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:10.436178 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:10.911744 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1209 05:51:10.936618 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1209 05:51:11.040149 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:11.040187 1437114 retry.go:31] will retry after 9.211684326s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:11.436896 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:11.936862 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:12.435946 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:12.936781 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1209 05:51:10.683655 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:51:12.684274 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	I1209 05:51:13.436827 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:13.848647 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1209 05:51:13.909374 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:13.909406 1437114 retry.go:31] will retry after 5.044533036s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:13.936521 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:14.436557 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:14.935977 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:15.436310 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:15.936335 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:16.436628 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:16.936535 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:17.436311 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:17.935962 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1209 05:51:15.183508 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:51:17.183575 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:51:19.184498 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	I1209 05:51:18.435898 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:18.936142 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:18.955073 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1209 05:51:19.020072 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:19.020104 1437114 retry.go:31] will retry after 11.951102235s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:19.184688 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1209 05:51:19.284505 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:19.284538 1437114 retry.go:31] will retry after 12.030085055s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:19.435928 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:19.936763 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:20.252740 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1209 05:51:20.316752 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:20.316784 1437114 retry.go:31] will retry after 7.019613017s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:20.436227 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:20.936875 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:21.435907 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:21.935963 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:22.436158 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:22.936474 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1209 05:51:21.683564 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:51:23.683626 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:51:26.184579 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	I1209 05:51:27.683214 1429857 node_ready.go:38] duration metric: took 6m0.000146062s for node "no-preload-842269" to be "Ready" ...
	I1209 05:51:27.686512 1429857 out.go:203] 
	W1209 05:51:27.689522 1429857 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1209 05:51:27.689540 1429857 out.go:285] * 
	W1209 05:51:27.691657 1429857 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 05:51:27.694499 1429857 out.go:203] 
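
Here the 6-minute node wait for the no-preload run expires: node_ready.go reports that the Ready condition never arrived within 6m0s, and the process exits with GUEST_START ("failed to start node ... context deadline exceeded"). Collecting diagnostics for the failing profile per the boxed advice would look like the following (the --file value is the one the message suggests; -p selects the profile named in the log):

    # capture full logs for the failed profile, as the error box suggests
    minikube logs --file=logs.txt -p no-preload-842269
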
	I1209 05:51:23.436353 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:23.936003 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:24.435917 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:24.936039 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:25.435883 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:25.936680 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:26.436359 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:26.936582 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:27.336866 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1209 05:51:27.401213 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:27.401248 1437114 retry.go:31] will retry after 15.185111317s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:27.436540 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:27.936409 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:28.436146 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:28.936943 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:29.435893 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:29.936169 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:30.435922 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:30.936805 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:30.972257 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1209 05:51:31.030985 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:31.031019 1437114 retry.go:31] will retry after 20.454574576s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:31.315422 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1209 05:51:31.375282 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:31.375315 1437114 retry.go:31] will retry after 20.731698158s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:31.436402 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:31.936683 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:32.436139 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:32.936168 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:33.436458 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:33.936647 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:34.435986 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:34.935949 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:35.436254 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:35.936501 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:36.436171 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:36.936413 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:37.436503 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:37.936112 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:38.436260 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:38.936155 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:39.435919 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:39.935963 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:40.435931 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:40.936251 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:41.435937 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:41.936193 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:42.436356 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
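	The pgrep lines above are minikube polling, roughly every 500ms, for a running kube-apiserver process; every probe in this window comes back empty. A minimal sketch of the same probe run by hand (assumes shell access to the node, e.g. via minikube ssh):

	    # exact-match (-x), newest (-n), full-command-line (-f) search, as in the log above
	    sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo 'kube-apiserver is not running'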
	I1209 05:51:42.587277 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1209 05:51:42.649100 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:42.649137 1437114 retry.go:31] will retry after 20.728553891s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:42.936771 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:43.435958 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:43.936674 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:44.436708 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:44.936177 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:45.436620 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:45.936616 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:46.436000 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:46.936141 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:47.435976 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:47.936139 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:48.436162 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:48.936736 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:49.436154 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:49.936192 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:50.436517 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:50.936806 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:51.436499 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:51.485950 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1209 05:51:51.548585 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:51.548614 1437114 retry.go:31] will retry after 47.596790172s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:51.936087 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:52.108051 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1209 05:51:52.167486 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:52.167519 1437114 retry.go:31] will retry after 29.777424896s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
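	Every apply above fails for the same underlying reason: kubectl's client-side validation needs the cluster's OpenAPI schema, and that download hits a refused connection on localhost:8443 because nothing is listening there. A minimal way to confirm the endpoint itself is down, reproducing the exact GET that kubectl reports (assumes curl is available on the node):

	    # same URL kubectl tries to fetch; -k skips cert verification, --max-time bounds the attempt
	    curl -sk --max-time 5 'https://localhost:8443/openapi/v2?timeout=32s' \
	      || echo 'openapi endpoint unreachable: connection refused, as in the stderr above'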
	I1209 05:51:52.436906 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:52.936203 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:53.436751 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:53.936576 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:54.436593 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:54.935988 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:55.436246 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:51:55.436382 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:51:55.467996 1437114 cri.go:89] found id: ""
	I1209 05:51:55.468084 1437114 logs.go:282] 0 containers: []
	W1209 05:51:55.468107 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:51:55.468125 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:51:55.468223 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:51:55.504401 1437114 cri.go:89] found id: ""
	I1209 05:51:55.504427 1437114 logs.go:282] 0 containers: []
	W1209 05:51:55.504434 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:51:55.504440 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:51:55.504513 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:51:55.530581 1437114 cri.go:89] found id: ""
	I1209 05:51:55.530606 1437114 logs.go:282] 0 containers: []
	W1209 05:51:55.530615 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:51:55.530621 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:51:55.530689 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:51:55.555637 1437114 cri.go:89] found id: ""
	I1209 05:51:55.555708 1437114 logs.go:282] 0 containers: []
	W1209 05:51:55.555744 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:51:55.555768 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:51:55.555867 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:51:55.582108 1437114 cri.go:89] found id: ""
	I1209 05:51:55.582132 1437114 logs.go:282] 0 containers: []
	W1209 05:51:55.582141 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:51:55.582148 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:51:55.582242 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:51:55.606067 1437114 cri.go:89] found id: ""
	I1209 05:51:55.606092 1437114 logs.go:282] 0 containers: []
	W1209 05:51:55.606101 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:51:55.606119 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:51:55.606179 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:51:55.632387 1437114 cri.go:89] found id: ""
	I1209 05:51:55.632413 1437114 logs.go:282] 0 containers: []
	W1209 05:51:55.632422 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:51:55.632428 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:51:55.632489 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:51:55.657181 1437114 cri.go:89] found id: ""
	I1209 05:51:55.657207 1437114 logs.go:282] 0 containers: []
	W1209 05:51:55.657215 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:51:55.657224 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:51:55.657236 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:51:55.718829 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:51:55.710893    1841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:51:55.711561    1841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:51:55.713071    1841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:51:55.713520    1841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:51:55.714997    1841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:51:55.710893    1841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:51:55.711561    1841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:51:55.713071    1841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:51:55.713520    1841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:51:55.714997    1841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:51:55.718849 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:51:55.718861 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:51:55.745044 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:51:55.745076 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:51:55.779273 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:51:55.779300 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:51:55.836724 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:51:55.836759 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
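	With the apiserver still down, minikube falls back to collecting diagnostics; the cycle above pulls the kubelet and containerd journals, dmesg, container status, and a (failing) describe nodes. The same bundle can be gathered manually, a sketch assuming systemd and crictl on the node:

	    sudo journalctl -u kubelet -n 400 --no-pager      # kubelet restarts / probe failures
	    sudo journalctl -u containerd -n 400 --no-pager   # runtime-side errors
	    sudo crictl ps -a                                 # all containers, any state
	    sudo dmesg --level warn,err,crit,alert,emerg | tail -n 400   # kernel-level warnings and errors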
	I1209 05:51:58.354526 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:58.364806 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:51:58.364873 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:51:58.394168 1437114 cri.go:89] found id: ""
	I1209 05:51:58.394193 1437114 logs.go:282] 0 containers: []
	W1209 05:51:58.394201 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:51:58.394213 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:51:58.394269 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:51:58.419742 1437114 cri.go:89] found id: ""
	I1209 05:51:58.419776 1437114 logs.go:282] 0 containers: []
	W1209 05:51:58.419785 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:51:58.419792 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:51:58.419859 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:51:58.464612 1437114 cri.go:89] found id: ""
	I1209 05:51:58.464637 1437114 logs.go:282] 0 containers: []
	W1209 05:51:58.464646 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:51:58.464652 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:51:58.464707 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:51:58.496121 1437114 cri.go:89] found id: ""
	I1209 05:51:58.496148 1437114 logs.go:282] 0 containers: []
	W1209 05:51:58.496157 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:51:58.496163 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:51:58.496259 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:51:58.520390 1437114 cri.go:89] found id: ""
	I1209 05:51:58.520429 1437114 logs.go:282] 0 containers: []
	W1209 05:51:58.520439 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:51:58.520452 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:51:58.520531 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:51:58.546795 1437114 cri.go:89] found id: ""
	I1209 05:51:58.546828 1437114 logs.go:282] 0 containers: []
	W1209 05:51:58.546838 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:51:58.546847 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:51:58.546911 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:51:58.570252 1437114 cri.go:89] found id: ""
	I1209 05:51:58.570279 1437114 logs.go:282] 0 containers: []
	W1209 05:51:58.570289 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:51:58.570295 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:51:58.570359 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:51:58.594153 1437114 cri.go:89] found id: ""
	I1209 05:51:58.594178 1437114 logs.go:282] 0 containers: []
	W1209 05:51:58.594187 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:51:58.594195 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:51:58.594207 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:51:58.621218 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:51:58.621244 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:51:58.675840 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:51:58.675877 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:51:58.691699 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:51:58.691734 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:51:58.755150 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:51:58.747260    1968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:51:58.747839    1968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:51:58.749288    1968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:51:58.749743    1968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:51:58.751180    1968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:51:58.747260    1968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:51:58.747839    1968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:51:58.749288    1968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:51:58.749743    1968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:51:58.751180    1968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:51:58.755171 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:51:58.755185 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:52:01.281475 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:52:01.293255 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:52:01.293329 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:52:01.318701 1437114 cri.go:89] found id: ""
	I1209 05:52:01.318740 1437114 logs.go:282] 0 containers: []
	W1209 05:52:01.318749 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:52:01.318757 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:52:01.318827 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:52:01.343120 1437114 cri.go:89] found id: ""
	I1209 05:52:01.343145 1437114 logs.go:282] 0 containers: []
	W1209 05:52:01.343154 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:52:01.343170 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:52:01.343228 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:52:01.367699 1437114 cri.go:89] found id: ""
	I1209 05:52:01.367725 1437114 logs.go:282] 0 containers: []
	W1209 05:52:01.367733 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:52:01.367749 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:52:01.367823 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:52:01.394578 1437114 cri.go:89] found id: ""
	I1209 05:52:01.394603 1437114 logs.go:282] 0 containers: []
	W1209 05:52:01.394612 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:52:01.394618 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:52:01.394677 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:52:01.423264 1437114 cri.go:89] found id: ""
	I1209 05:52:01.423290 1437114 logs.go:282] 0 containers: []
	W1209 05:52:01.423299 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:52:01.423305 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:52:01.423367 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:52:01.460737 1437114 cri.go:89] found id: ""
	I1209 05:52:01.460764 1437114 logs.go:282] 0 containers: []
	W1209 05:52:01.460772 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:52:01.460778 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:52:01.460850 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:52:01.493246 1437114 cri.go:89] found id: ""
	I1209 05:52:01.493272 1437114 logs.go:282] 0 containers: []
	W1209 05:52:01.493281 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:52:01.493287 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:52:01.493364 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:52:01.517585 1437114 cri.go:89] found id: ""
	I1209 05:52:01.517612 1437114 logs.go:282] 0 containers: []
	W1209 05:52:01.517620 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:52:01.517630 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:52:01.517670 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:52:01.579907 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:52:01.571951    2063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:01.572467    2063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:01.574150    2063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:01.574485    2063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:01.575978    2063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:52:01.571951    2063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:01.572467    2063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:01.574150    2063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:01.574485    2063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:01.575978    2063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:52:01.579934 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:52:01.579951 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:52:01.605933 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:52:01.605968 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:52:01.633450 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:52:01.633476 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:52:01.690768 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:52:01.690809 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:52:03.378312 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1209 05:52:03.443761 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:52:03.443892 1437114 retry.go:31] will retry after 46.030372913s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:52:04.208154 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:52:04.218947 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:52:04.219023 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:52:04.250185 1437114 cri.go:89] found id: ""
	I1209 05:52:04.250210 1437114 logs.go:282] 0 containers: []
	W1209 05:52:04.250219 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:52:04.250226 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:52:04.250336 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:52:04.278437 1437114 cri.go:89] found id: ""
	I1209 05:52:04.278462 1437114 logs.go:282] 0 containers: []
	W1209 05:52:04.278471 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:52:04.278477 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:52:04.278540 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:52:04.306148 1437114 cri.go:89] found id: ""
	I1209 05:52:04.306212 1437114 logs.go:282] 0 containers: []
	W1209 05:52:04.306227 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:52:04.306235 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:52:04.306294 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:52:04.330968 1437114 cri.go:89] found id: ""
	I1209 05:52:04.330995 1437114 logs.go:282] 0 containers: []
	W1209 05:52:04.331003 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:52:04.331014 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:52:04.331074 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:52:04.361139 1437114 cri.go:89] found id: ""
	I1209 05:52:04.361213 1437114 logs.go:282] 0 containers: []
	W1209 05:52:04.361228 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:52:04.361235 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:52:04.361292 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:52:04.384663 1437114 cri.go:89] found id: ""
	I1209 05:52:04.384728 1437114 logs.go:282] 0 containers: []
	W1209 05:52:04.384744 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:52:04.384751 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:52:04.384819 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:52:04.409163 1437114 cri.go:89] found id: ""
	I1209 05:52:04.409188 1437114 logs.go:282] 0 containers: []
	W1209 05:52:04.409196 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:52:04.409202 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:52:04.409260 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:52:04.438875 1437114 cri.go:89] found id: ""
	I1209 05:52:04.438901 1437114 logs.go:282] 0 containers: []
	W1209 05:52:04.438911 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:52:04.438920 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:52:04.438930 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:52:04.504081 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:52:04.504118 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:52:04.520282 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:52:04.520314 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:52:04.582173 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:52:04.574497    2189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:04.575080    2189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:04.576516    2189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:04.576898    2189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:04.578287    2189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:52:04.574497    2189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:04.575080    2189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:04.576516    2189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:04.576898    2189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:04.578287    2189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:52:04.582197 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:52:04.582209 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:52:04.607423 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:52:04.607456 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:52:07.139347 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:52:07.149801 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:52:07.149872 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:52:07.174952 1437114 cri.go:89] found id: ""
	I1209 05:52:07.174980 1437114 logs.go:282] 0 containers: []
	W1209 05:52:07.174988 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:52:07.174995 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:52:07.175054 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:52:07.202325 1437114 cri.go:89] found id: ""
	I1209 05:52:07.202387 1437114 logs.go:282] 0 containers: []
	W1209 05:52:07.202418 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:52:07.202437 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:52:07.202533 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:52:07.232008 1437114 cri.go:89] found id: ""
	I1209 05:52:07.232092 1437114 logs.go:282] 0 containers: []
	W1209 05:52:07.232147 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:52:07.232170 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:52:07.232265 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:52:07.259048 1437114 cri.go:89] found id: ""
	I1209 05:52:07.259075 1437114 logs.go:282] 0 containers: []
	W1209 05:52:07.259084 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:52:07.259091 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:52:07.259147 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:52:07.283135 1437114 cri.go:89] found id: ""
	I1209 05:52:07.283161 1437114 logs.go:282] 0 containers: []
	W1209 05:52:07.283169 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:52:07.283175 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:52:07.283285 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:52:07.307259 1437114 cri.go:89] found id: ""
	I1209 05:52:07.307285 1437114 logs.go:282] 0 containers: []
	W1209 05:52:07.307294 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:52:07.307300 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:52:07.307357 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:52:07.331534 1437114 cri.go:89] found id: ""
	I1209 05:52:07.331604 1437114 logs.go:282] 0 containers: []
	W1209 05:52:07.331627 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:52:07.331645 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:52:07.331742 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:52:07.358525 1437114 cri.go:89] found id: ""
	I1209 05:52:07.358548 1437114 logs.go:282] 0 containers: []
	W1209 05:52:07.358557 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:52:07.358565 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:52:07.358577 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:52:07.424932 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:52:07.417064    2295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:07.417623    2295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:07.419222    2295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:07.419698    2295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:07.421122    2295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:52:07.425003 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:52:07.425028 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:52:07.452549 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:52:07.452633 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:52:07.488600 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:52:07.488675 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:52:07.547568 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:52:07.547604 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:52:10.063961 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:52:10.075421 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:52:10.075510 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:52:10.106279 1437114 cri.go:89] found id: ""
	I1209 05:52:10.106307 1437114 logs.go:282] 0 containers: []
	W1209 05:52:10.106317 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:52:10.106323 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:52:10.106395 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:52:10.140825 1437114 cri.go:89] found id: ""
	I1209 05:52:10.140865 1437114 logs.go:282] 0 containers: []
	W1209 05:52:10.140874 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:52:10.140881 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:52:10.140961 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:52:10.166337 1437114 cri.go:89] found id: ""
	I1209 05:52:10.166364 1437114 logs.go:282] 0 containers: []
	W1209 05:52:10.166373 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:52:10.166380 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:52:10.166460 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:52:10.202390 1437114 cri.go:89] found id: ""
	I1209 05:52:10.202417 1437114 logs.go:282] 0 containers: []
	W1209 05:52:10.202426 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:52:10.202432 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:52:10.202541 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:52:10.230690 1437114 cri.go:89] found id: ""
	I1209 05:52:10.230716 1437114 logs.go:282] 0 containers: []
	W1209 05:52:10.230726 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:52:10.230733 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:52:10.230847 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:52:10.257345 1437114 cri.go:89] found id: ""
	I1209 05:52:10.257371 1437114 logs.go:282] 0 containers: []
	W1209 05:52:10.257380 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:52:10.257386 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:52:10.257452 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:52:10.282028 1437114 cri.go:89] found id: ""
	I1209 05:52:10.282053 1437114 logs.go:282] 0 containers: []
	W1209 05:52:10.282062 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:52:10.282069 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:52:10.282136 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:52:10.306484 1437114 cri.go:89] found id: ""
	I1209 05:52:10.306509 1437114 logs.go:282] 0 containers: []
	W1209 05:52:10.306519 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:52:10.306538 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:52:10.306550 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:52:10.334032 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:52:10.334059 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:52:10.396200 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:52:10.396241 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:52:10.412481 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:52:10.412513 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:52:10.512214 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:52:10.503459    2422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:10.504106    2422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:10.505795    2422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:10.506184    2422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:10.507800    2422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:52:10.512237 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:52:10.512250 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:52:13.038285 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:52:13.048783 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:52:13.048856 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:52:13.073147 1437114 cri.go:89] found id: ""
	I1209 05:52:13.073174 1437114 logs.go:282] 0 containers: []
	W1209 05:52:13.073182 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:52:13.073189 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:52:13.073264 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:52:13.096887 1437114 cri.go:89] found id: ""
	I1209 05:52:13.096911 1437114 logs.go:282] 0 containers: []
	W1209 05:52:13.096919 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:52:13.096926 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:52:13.096983 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:52:13.120441 1437114 cri.go:89] found id: ""
	I1209 05:52:13.120466 1437114 logs.go:282] 0 containers: []
	W1209 05:52:13.120475 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:52:13.120482 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:52:13.120540 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:52:13.144403 1437114 cri.go:89] found id: ""
	I1209 05:52:13.144478 1437114 logs.go:282] 0 containers: []
	W1209 05:52:13.144494 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:52:13.144504 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:52:13.144576 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:52:13.168584 1437114 cri.go:89] found id: ""
	I1209 05:52:13.168610 1437114 logs.go:282] 0 containers: []
	W1209 05:52:13.168619 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:52:13.168626 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:52:13.168683 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:52:13.204797 1437114 cri.go:89] found id: ""
	I1209 05:52:13.204824 1437114 logs.go:282] 0 containers: []
	W1209 05:52:13.204833 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:52:13.204840 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:52:13.204899 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:52:13.231178 1437114 cri.go:89] found id: ""
	I1209 05:52:13.231205 1437114 logs.go:282] 0 containers: []
	W1209 05:52:13.231214 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:52:13.231220 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:52:13.231278 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:52:13.260307 1437114 cri.go:89] found id: ""
	I1209 05:52:13.260331 1437114 logs.go:282] 0 containers: []
	W1209 05:52:13.260341 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:52:13.260350 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:52:13.260361 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:52:13.286145 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:52:13.286182 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:52:13.315119 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:52:13.315147 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:52:13.369862 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:52:13.369894 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:52:13.385795 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:52:13.385822 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:52:13.451305 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:52:13.443201    2536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:13.444044    2536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:13.445720    2536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:13.446006    2536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:13.447466    2536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:52:15.952193 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:52:15.962440 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:52:15.962511 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:52:15.990421 1437114 cri.go:89] found id: ""
	I1209 05:52:15.990444 1437114 logs.go:282] 0 containers: []
	W1209 05:52:15.990452 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:52:15.990459 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:52:15.990527 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:52:16.025731 1437114 cri.go:89] found id: ""
	I1209 05:52:16.025759 1437114 logs.go:282] 0 containers: []
	W1209 05:52:16.025768 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:52:16.025775 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:52:16.025850 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:52:16.051150 1437114 cri.go:89] found id: ""
	I1209 05:52:16.051184 1437114 logs.go:282] 0 containers: []
	W1209 05:52:16.051193 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:52:16.051199 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:52:16.051269 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:52:16.080315 1437114 cri.go:89] found id: ""
	I1209 05:52:16.080343 1437114 logs.go:282] 0 containers: []
	W1209 05:52:16.080352 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:52:16.080358 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:52:16.080421 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:52:16.106254 1437114 cri.go:89] found id: ""
	I1209 05:52:16.106329 1437114 logs.go:282] 0 containers: []
	W1209 05:52:16.106344 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:52:16.106351 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:52:16.106419 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:52:16.130691 1437114 cri.go:89] found id: ""
	I1209 05:52:16.130717 1437114 logs.go:282] 0 containers: []
	W1209 05:52:16.130726 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:52:16.130732 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:52:16.130788 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:52:16.156232 1437114 cri.go:89] found id: ""
	I1209 05:52:16.156257 1437114 logs.go:282] 0 containers: []
	W1209 05:52:16.156266 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:52:16.156272 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:52:16.156333 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:52:16.186070 1437114 cri.go:89] found id: ""
	I1209 05:52:16.186091 1437114 logs.go:282] 0 containers: []
	W1209 05:52:16.186100 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:52:16.186109 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:52:16.186121 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:52:16.203551 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:52:16.203579 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:52:16.280037 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:52:16.272128    2631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:16.272800    2631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:16.274272    2631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:16.274686    2631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:16.276185    2631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:52:16.280087 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:52:16.280102 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:52:16.304445 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:52:16.304479 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:52:16.333574 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:52:16.333599 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:52:18.890807 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:52:18.901129 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:52:18.901207 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:52:18.925553 1437114 cri.go:89] found id: ""
	I1209 05:52:18.925576 1437114 logs.go:282] 0 containers: []
	W1209 05:52:18.925584 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:52:18.925590 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:52:18.925648 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:52:18.951104 1437114 cri.go:89] found id: ""
	I1209 05:52:18.951180 1437114 logs.go:282] 0 containers: []
	W1209 05:52:18.951203 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:52:18.951221 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:52:18.951309 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:52:18.975343 1437114 cri.go:89] found id: ""
	I1209 05:52:18.975407 1437114 logs.go:282] 0 containers: []
	W1209 05:52:18.975432 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:52:18.975450 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:52:18.975535 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:52:18.999522 1437114 cri.go:89] found id: ""
	I1209 05:52:18.999596 1437114 logs.go:282] 0 containers: []
	W1209 05:52:18.999619 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:52:18.999637 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:52:18.999722 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:52:19.025106 1437114 cri.go:89] found id: ""
	I1209 05:52:19.025181 1437114 logs.go:282] 0 containers: []
	W1209 05:52:19.025203 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:52:19.025221 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:52:19.025307 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:52:19.047867 1437114 cri.go:89] found id: ""
	I1209 05:52:19.047944 1437114 logs.go:282] 0 containers: []
	W1209 05:52:19.047966 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:52:19.048006 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:52:19.048106 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:52:19.071487 1437114 cri.go:89] found id: ""
	I1209 05:52:19.071511 1437114 logs.go:282] 0 containers: []
	W1209 05:52:19.071519 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:52:19.071526 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:52:19.071585 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:52:19.096506 1437114 cri.go:89] found id: ""
	I1209 05:52:19.096531 1437114 logs.go:282] 0 containers: []
	W1209 05:52:19.096540 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:52:19.096549 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:52:19.096595 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:52:19.111961 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:52:19.112001 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:52:19.184448 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:52:19.173564    2740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:19.174163    2740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:19.175662    2740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:19.176275    2740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:19.178917    2740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:52:19.184473 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:52:19.184487 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:52:19.213109 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:52:19.213148 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:52:19.242001 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:52:19.242036 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:52:21.800441 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:52:21.810634 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:52:21.810706 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:52:21.835147 1437114 cri.go:89] found id: ""
	I1209 05:52:21.835171 1437114 logs.go:282] 0 containers: []
	W1209 05:52:21.835180 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:52:21.835186 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:52:21.835244 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:52:21.863735 1437114 cri.go:89] found id: ""
	I1209 05:52:21.863760 1437114 logs.go:282] 0 containers: []
	W1209 05:52:21.863769 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:52:21.863775 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:52:21.863833 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:52:21.887643 1437114 cri.go:89] found id: ""
	I1209 05:52:21.887667 1437114 logs.go:282] 0 containers: []
	W1209 05:52:21.887676 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:52:21.887682 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:52:21.887738 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:52:21.912358 1437114 cri.go:89] found id: ""
	I1209 05:52:21.912384 1437114 logs.go:282] 0 containers: []
	W1209 05:52:21.912392 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:52:21.912399 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:52:21.912458 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:52:21.941394 1437114 cri.go:89] found id: ""
	I1209 05:52:21.941420 1437114 logs.go:282] 0 containers: []
	W1209 05:52:21.941429 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:52:21.941435 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:52:21.941521 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:52:21.945768 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 05:52:21.973669 1437114 cri.go:89] found id: ""
	I1209 05:52:21.973703 1437114 logs.go:282] 0 containers: []
	W1209 05:52:21.973712 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:52:21.973734 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:52:21.973814 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	W1209 05:52:22.028092 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:52:22.028115 1437114 cri.go:89] found id: ""
	I1209 05:52:22.028247 1437114 logs.go:282] 0 containers: []
	W1209 05:52:22.028256 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:52:22.028268 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	W1209 05:52:22.028296 1437114 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1209 05:52:22.028335 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:52:22.054827 1437114 cri.go:89] found id: ""
	I1209 05:52:22.054854 1437114 logs.go:282] 0 containers: []
	W1209 05:52:22.054862 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:52:22.054871 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:52:22.054883 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:52:22.081941 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:52:22.081985 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:52:22.109801 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:52:22.109829 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:52:22.167418 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:52:22.167455 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:52:22.186947 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:52:22.187039 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:52:22.274107 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:52:22.265076    2876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:22.265712    2876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:22.267349    2876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:22.267990    2876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:22.269553    2876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:52:24.774371 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:52:24.785291 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:52:24.785383 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:52:24.810496 1437114 cri.go:89] found id: ""
	I1209 05:52:24.810521 1437114 logs.go:282] 0 containers: []
	W1209 05:52:24.810530 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:52:24.810537 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:52:24.810641 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:52:24.840246 1437114 cri.go:89] found id: ""
	I1209 05:52:24.840283 1437114 logs.go:282] 0 containers: []
	W1209 05:52:24.840292 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:52:24.840298 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:52:24.840383 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:52:24.866227 1437114 cri.go:89] found id: ""
	I1209 05:52:24.866252 1437114 logs.go:282] 0 containers: []
	W1209 05:52:24.866267 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:52:24.866274 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:52:24.866334 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:52:24.894487 1437114 cri.go:89] found id: ""
	I1209 05:52:24.894512 1437114 logs.go:282] 0 containers: []
	W1209 05:52:24.894521 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:52:24.894528 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:52:24.894592 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:52:24.919081 1437114 cri.go:89] found id: ""
	I1209 05:52:24.919106 1437114 logs.go:282] 0 containers: []
	W1209 05:52:24.919115 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:52:24.919122 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:52:24.919182 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:52:24.942639 1437114 cri.go:89] found id: ""
	I1209 05:52:24.942664 1437114 logs.go:282] 0 containers: []
	W1209 05:52:24.942673 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:52:24.942679 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:52:24.942736 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:52:24.966811 1437114 cri.go:89] found id: ""
	I1209 05:52:24.966835 1437114 logs.go:282] 0 containers: []
	W1209 05:52:24.966844 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:52:24.966849 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:52:24.966906 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:52:24.990491 1437114 cri.go:89] found id: ""
	I1209 05:52:24.990515 1437114 logs.go:282] 0 containers: []
	W1209 05:52:24.990524 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:52:24.990533 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:52:24.990544 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:52:25.049211 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:52:25.049244 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:52:25.065441 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:52:25.065469 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:52:25.128713 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:52:25.120700    2971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:25.121283    2971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:25.122776    2971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:25.123296    2971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:25.124752    2971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:52:25.128735 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:52:25.128750 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:52:25.154485 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:52:25.154518 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:52:27.686448 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:52:27.697271 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:52:27.697388 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:52:27.723850 1437114 cri.go:89] found id: ""
	I1209 05:52:27.723930 1437114 logs.go:282] 0 containers: []
	W1209 05:52:27.723953 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:52:27.723970 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:52:27.724082 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:52:27.749864 1437114 cri.go:89] found id: ""
	I1209 05:52:27.749889 1437114 logs.go:282] 0 containers: []
	W1209 05:52:27.749897 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:52:27.749904 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:52:27.749989 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:52:27.773124 1437114 cri.go:89] found id: ""
	I1209 05:52:27.773151 1437114 logs.go:282] 0 containers: []
	W1209 05:52:27.773167 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:52:27.773174 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:52:27.773238 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:52:27.802090 1437114 cri.go:89] found id: ""
	I1209 05:52:27.802118 1437114 logs.go:282] 0 containers: []
	W1209 05:52:27.802128 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:52:27.802134 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:52:27.802193 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:52:27.827324 1437114 cri.go:89] found id: ""
	I1209 05:52:27.827349 1437114 logs.go:282] 0 containers: []
	W1209 05:52:27.827361 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:52:27.827367 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:52:27.827425 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:52:27.855877 1437114 cri.go:89] found id: ""
	I1209 05:52:27.855905 1437114 logs.go:282] 0 containers: []
	W1209 05:52:27.855914 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:52:27.855920 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:52:27.855980 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:52:27.880242 1437114 cri.go:89] found id: ""
	I1209 05:52:27.880322 1437114 logs.go:282] 0 containers: []
	W1209 05:52:27.880346 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:52:27.880365 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:52:27.880457 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:52:27.903986 1437114 cri.go:89] found id: ""
	I1209 05:52:27.904032 1437114 logs.go:282] 0 containers: []
	W1209 05:52:27.904041 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:52:27.904079 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:52:27.904100 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:52:27.937811 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:52:27.937838 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:52:27.993533 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:52:27.993570 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:52:28.010780 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:52:28.010818 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:52:28.075391 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:52:28.066786    3096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:28.067667    3096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:28.069423    3096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:28.069776    3096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:28.071145    3096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:52:28.075424 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:52:28.075454 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:52:30.602097 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:52:30.612434 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:52:30.612508 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:52:30.638153 1437114 cri.go:89] found id: ""
	I1209 05:52:30.638183 1437114 logs.go:282] 0 containers: []
	W1209 05:52:30.638191 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:52:30.638197 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:52:30.638280 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:52:30.664120 1437114 cri.go:89] found id: ""
	I1209 05:52:30.664206 1437114 logs.go:282] 0 containers: []
	W1209 05:52:30.664221 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:52:30.664229 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:52:30.664291 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:52:30.695098 1437114 cri.go:89] found id: ""
	I1209 05:52:30.695124 1437114 logs.go:282] 0 containers: []
	W1209 05:52:30.695132 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:52:30.695138 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:52:30.695196 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:52:30.728679 1437114 cri.go:89] found id: ""
	I1209 05:52:30.728703 1437114 logs.go:282] 0 containers: []
	W1209 05:52:30.728711 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:52:30.728718 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:52:30.728777 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:52:30.757085 1437114 cri.go:89] found id: ""
	I1209 05:52:30.757108 1437114 logs.go:282] 0 containers: []
	W1209 05:52:30.757116 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:52:30.757122 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:52:30.757190 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:52:30.781813 1437114 cri.go:89] found id: ""
	I1209 05:52:30.781838 1437114 logs.go:282] 0 containers: []
	W1209 05:52:30.781847 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:52:30.781853 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:52:30.781931 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:52:30.805893 1437114 cri.go:89] found id: ""
	I1209 05:52:30.805958 1437114 logs.go:282] 0 containers: []
	W1209 05:52:30.805972 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:52:30.805980 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:52:30.806045 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:52:30.838632 1437114 cri.go:89] found id: ""
	I1209 05:52:30.838657 1437114 logs.go:282] 0 containers: []
	W1209 05:52:30.838666 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
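Each sweep above issues one crictl query per control-plane component; --quiet prints bare container IDs one per line, so empty output is exactly what cri.go reports as found id: "". A minimal sketch of that probe loop, assuming crictl is on PATH with a reachable containerd socket:

    // crisweep.go - hypothetical sketch of the per-component container probe.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
        }
        for _, name := range components {
            out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
            ids := strings.Fields(string(out))
            if err != nil || len(ids) == 0 {
                fmt.Printf("no container found matching %q\n", name)
                continue
            }
            fmt.Printf("%s: %v\n", name, ids)
        }
    }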
	I1209 05:52:30.838675 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:52:30.838686 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:52:30.853978 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:52:30.854004 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:52:30.918110 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:52:30.910818    3197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:30.911400    3197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:30.912432    3197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:30.912927    3197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:30.914407    3197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:52:30.918132 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:52:30.918144 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:52:30.943105 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:52:30.943142 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
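The container-status command is deliberately defensive: the backquoted which crictl || echo crictl resolves crictl's full path when installed and otherwise leaves the bare name so the error stays readable, while the trailing || sudo docker ps -a falls back to the Docker CLI on runtimes without crictl. A hedged Go rendering of that fallback chain:

    // status_fallback.go - hypothetical sketch of the crictl-then-docker fallback.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput()
        if err != nil {
            // crictl missing or broken: fall back to the Docker CLI.
            out, err = exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
        }
        if err != nil {
            fmt.Println("no container runtime CLI answered:", err)
            return
        }
        fmt.Print(string(out))
    }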
	I1209 05:52:30.969706 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:52:30.969735 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:52:33.525286 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:52:33.535730 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:52:33.535803 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:52:33.559344 1437114 cri.go:89] found id: ""
	I1209 05:52:33.559369 1437114 logs.go:282] 0 containers: []
	W1209 05:52:33.559378 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:52:33.559384 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:52:33.559441 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:52:33.588185 1437114 cri.go:89] found id: ""
	I1209 05:52:33.588254 1437114 logs.go:282] 0 containers: []
	W1209 05:52:33.588278 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:52:33.588292 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:52:33.588366 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:52:33.613255 1437114 cri.go:89] found id: ""
	I1209 05:52:33.613279 1437114 logs.go:282] 0 containers: []
	W1209 05:52:33.613288 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:52:33.613295 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:52:33.613382 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:52:33.636919 1437114 cri.go:89] found id: ""
	I1209 05:52:33.636953 1437114 logs.go:282] 0 containers: []
	W1209 05:52:33.636961 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:52:33.636968 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:52:33.637035 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:52:33.666309 1437114 cri.go:89] found id: ""
	I1209 05:52:33.666342 1437114 logs.go:282] 0 containers: []
	W1209 05:52:33.666351 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:52:33.666358 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:52:33.666424 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:52:33.698208 1437114 cri.go:89] found id: ""
	I1209 05:52:33.698283 1437114 logs.go:282] 0 containers: []
	W1209 05:52:33.698305 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:52:33.698324 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:52:33.698413 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:52:33.730383 1437114 cri.go:89] found id: ""
	I1209 05:52:33.730456 1437114 logs.go:282] 0 containers: []
	W1209 05:52:33.730479 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:52:33.730499 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:52:33.730585 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:52:33.759854 1437114 cri.go:89] found id: ""
	I1209 05:52:33.759930 1437114 logs.go:282] 0 containers: []
	W1209 05:52:33.759952 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:52:33.759972 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:52:33.760007 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:52:33.822572 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:52:33.815081    3306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:33.815468    3306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:33.816948    3306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:33.817250    3306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:33.818729    3306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:52:33.822593 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:52:33.822606 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:52:33.848713 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:52:33.848751 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:52:33.875169 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:52:33.875202 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:52:33.929863 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:52:33.929899 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:52:36.446655 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:52:36.457494 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:52:36.457564 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:52:36.489953 1437114 cri.go:89] found id: ""
	I1209 05:52:36.490015 1437114 logs.go:282] 0 containers: []
	W1209 05:52:36.490045 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:52:36.490069 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:52:36.490171 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:52:36.518208 1437114 cri.go:89] found id: ""
	I1209 05:52:36.518232 1437114 logs.go:282] 0 containers: []
	W1209 05:52:36.518240 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:52:36.518246 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:52:36.518303 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:52:36.546757 1437114 cri.go:89] found id: ""
	I1209 05:52:36.546830 1437114 logs.go:282] 0 containers: []
	W1209 05:52:36.546852 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:52:36.546870 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:52:36.546958 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:52:36.573478 1437114 cri.go:89] found id: ""
	I1209 05:52:36.573504 1437114 logs.go:282] 0 containers: []
	W1209 05:52:36.573512 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:52:36.573518 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:52:36.573573 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:52:36.597359 1437114 cri.go:89] found id: ""
	I1209 05:52:36.597384 1437114 logs.go:282] 0 containers: []
	W1209 05:52:36.597392 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:52:36.597399 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:52:36.597456 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:52:36.626723 1437114 cri.go:89] found id: ""
	I1209 05:52:36.626750 1437114 logs.go:282] 0 containers: []
	W1209 05:52:36.626758 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:52:36.626765 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:52:36.626821 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:52:36.651878 1437114 cri.go:89] found id: ""
	I1209 05:52:36.651904 1437114 logs.go:282] 0 containers: []
	W1209 05:52:36.651913 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:52:36.651920 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:52:36.651983 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:52:36.677687 1437114 cri.go:89] found id: ""
	I1209 05:52:36.677763 1437114 logs.go:282] 0 containers: []
	W1209 05:52:36.677786 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:52:36.677806 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:52:36.677844 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:52:36.762388 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:52:36.754574    3419 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:36.755265    3419 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:36.756812    3419 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:36.757117    3419 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:36.758563    3419 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:52:36.762408 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:52:36.762421 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:52:36.787210 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:52:36.787245 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:52:36.813523 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:52:36.813549 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:52:36.871098 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:52:36.871134 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
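Note the cadence: the pgrep probes for kube-apiserver land at 05:52:30, :33, :36 and :39, so the harness is polling roughly every three seconds and re-gathering the full log set on each miss. A hypothetical skeleton of such a poll loop; the interval and deadline below are illustrative, not minikube's actual tuning:

    // apiserverpoll.go - hypothetical wait-for-apiserver loop.
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        deadline := time.Now().Add(5 * time.Minute) // illustrative timeout
        for time.Now().Before(deadline) {
            // Same liveness probe the log shows: is a kube-apiserver process up?
            if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
                fmt.Println("kube-apiserver process found")
                return
            }
            time.Sleep(3 * time.Second) // matches the ~3 s cadence in the timestamps
        }
        fmt.Println("gave up waiting for kube-apiserver")
    }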
	I1209 05:52:39.145660 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1209 05:52:39.203856 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 05:52:39.203957 1437114 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
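addons.go logs "apply failed, will retry" because the manifest is applied through the same dead apiserver: kubectl cannot download the OpenAPI schema it needs for client-side validation, so the apply exits 1 and will keep doing so until the control plane answers. A hedged sketch of an apply-with-retry wrapper; the retry budget and backoff are assumptions, not minikube's:

    // applyretry.go - hypothetical retrying kubectl apply, not minikube's addons.go.
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func applyWithRetry(manifest string, attempts int) error {
        var err error
        for i := 0; i < attempts; i++ {
            cmd := exec.Command("sudo",
                "KUBECONFIG=/var/lib/minikube/kubeconfig", // env assignment passed through sudo, as in the log
                "/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
                "apply", "--force", "-f", manifest)
            if err = cmd.Run(); err == nil {
                return nil
            }
            time.Sleep(5 * time.Second) // illustrative backoff
        }
        return fmt.Errorf("apply of %s failed after %d attempts: %w", manifest, attempts, err)
    }

    func main() {
        if err := applyWithRetry("/etc/kubernetes/addons/storage-provisioner.yaml", 3); err != nil {
            fmt.Println(err)
        }
    }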
	I1209 05:52:39.388175 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:52:39.398492 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:52:39.398583 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:52:39.425881 1437114 cri.go:89] found id: ""
	I1209 05:52:39.425914 1437114 logs.go:282] 0 containers: []
	W1209 05:52:39.425924 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:52:39.425930 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:52:39.425998 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:52:39.450356 1437114 cri.go:89] found id: ""
	I1209 05:52:39.450390 1437114 logs.go:282] 0 containers: []
	W1209 05:52:39.450399 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:52:39.450405 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:52:39.450472 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:52:39.482441 1437114 cri.go:89] found id: ""
	I1209 05:52:39.482475 1437114 logs.go:282] 0 containers: []
	W1209 05:52:39.482483 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:52:39.482490 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:52:39.482554 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:52:39.512577 1437114 cri.go:89] found id: ""
	I1209 05:52:39.512602 1437114 logs.go:282] 0 containers: []
	W1209 05:52:39.512611 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:52:39.512617 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:52:39.512674 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:52:39.537514 1437114 cri.go:89] found id: ""
	I1209 05:52:39.537539 1437114 logs.go:282] 0 containers: []
	W1209 05:52:39.537547 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:52:39.537559 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:52:39.537620 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:52:39.561319 1437114 cri.go:89] found id: ""
	I1209 05:52:39.561352 1437114 logs.go:282] 0 containers: []
	W1209 05:52:39.561360 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:52:39.561366 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:52:39.561442 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:52:39.589300 1437114 cri.go:89] found id: ""
	I1209 05:52:39.589324 1437114 logs.go:282] 0 containers: []
	W1209 05:52:39.589333 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:52:39.589339 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:52:39.589398 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:52:39.620288 1437114 cri.go:89] found id: ""
	I1209 05:52:39.620312 1437114 logs.go:282] 0 containers: []
	W1209 05:52:39.620321 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:52:39.620339 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:52:39.620351 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:52:39.678215 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:52:39.678293 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:52:39.697337 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:52:39.697364 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:52:39.767115 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:52:39.758981    3543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:39.759384    3543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:39.761296    3543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:39.761699    3543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:39.763232    3543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:52:39.767135 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:52:39.767147 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:52:39.791949 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:52:39.791985 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:52:42.324195 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:52:42.339508 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:52:42.339591 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:52:42.370155 1437114 cri.go:89] found id: ""
	I1209 05:52:42.370181 1437114 logs.go:282] 0 containers: []
	W1209 05:52:42.370192 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:52:42.370199 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:52:42.370268 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:52:42.395020 1437114 cri.go:89] found id: ""
	I1209 05:52:42.395054 1437114 logs.go:282] 0 containers: []
	W1209 05:52:42.395063 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:52:42.395069 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:52:42.395136 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:52:42.423571 1437114 cri.go:89] found id: ""
	I1209 05:52:42.423604 1437114 logs.go:282] 0 containers: []
	W1209 05:52:42.423612 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:52:42.423618 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:52:42.423684 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:52:42.449744 1437114 cri.go:89] found id: ""
	I1209 05:52:42.449821 1437114 logs.go:282] 0 containers: []
	W1209 05:52:42.449846 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:52:42.449865 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:52:42.449951 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:52:42.476838 1437114 cri.go:89] found id: ""
	I1209 05:52:42.476864 1437114 logs.go:282] 0 containers: []
	W1209 05:52:42.476872 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:52:42.476879 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:52:42.476957 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:52:42.505251 1437114 cri.go:89] found id: ""
	I1209 05:52:42.505278 1437114 logs.go:282] 0 containers: []
	W1209 05:52:42.505287 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:52:42.505294 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:52:42.505372 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:52:42.529646 1437114 cri.go:89] found id: ""
	I1209 05:52:42.529712 1437114 logs.go:282] 0 containers: []
	W1209 05:52:42.529728 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:52:42.529741 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:52:42.529803 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:52:42.553792 1437114 cri.go:89] found id: ""
	I1209 05:52:42.553818 1437114 logs.go:282] 0 containers: []
	W1209 05:52:42.553827 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:52:42.553836 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:52:42.553865 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:52:42.610712 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:52:42.610750 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:52:42.626470 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:52:42.626498 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:52:42.691633 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:52:42.681192    3655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:42.683916    3655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:42.685453    3655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:42.685744    3655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:42.687188    3655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:52:42.691658 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:52:42.691672 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:52:42.721023 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:52:42.721056 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:52:45.257072 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:52:45.279876 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:52:45.279970 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:52:45.310797 1437114 cri.go:89] found id: ""
	I1209 05:52:45.310822 1437114 logs.go:282] 0 containers: []
	W1209 05:52:45.310831 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:52:45.310837 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:52:45.310915 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:52:45.339967 1437114 cri.go:89] found id: ""
	I1209 05:52:45.339990 1437114 logs.go:282] 0 containers: []
	W1209 05:52:45.339999 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:52:45.340004 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:52:45.340083 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:52:45.368323 1437114 cri.go:89] found id: ""
	I1209 05:52:45.368351 1437114 logs.go:282] 0 containers: []
	W1209 05:52:45.368360 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:52:45.368368 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:52:45.368427 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:52:45.393892 1437114 cri.go:89] found id: ""
	I1209 05:52:45.393918 1437114 logs.go:282] 0 containers: []
	W1209 05:52:45.393926 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:52:45.393932 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:52:45.393995 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:52:45.418992 1437114 cri.go:89] found id: ""
	I1209 05:52:45.419025 1437114 logs.go:282] 0 containers: []
	W1209 05:52:45.419035 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:52:45.419041 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:52:45.419107 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:52:45.461356 1437114 cri.go:89] found id: ""
	I1209 05:52:45.461392 1437114 logs.go:282] 0 containers: []
	W1209 05:52:45.461401 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:52:45.461407 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:52:45.461481 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:52:45.493718 1437114 cri.go:89] found id: ""
	I1209 05:52:45.493753 1437114 logs.go:282] 0 containers: []
	W1209 05:52:45.493762 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:52:45.493768 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:52:45.493836 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:52:45.517850 1437114 cri.go:89] found id: ""
	I1209 05:52:45.517876 1437114 logs.go:282] 0 containers: []
	W1209 05:52:45.517898 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:52:45.517907 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:52:45.517922 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:52:45.576699 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:52:45.576736 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:52:45.592339 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:52:45.592368 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:52:45.660368 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:52:45.651938    3766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:45.652711    3766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:45.654414    3766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:45.654934    3766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:45.656559    3766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:52:45.660391 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:52:45.660404 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:52:45.687142 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:52:45.687222 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:52:48.227261 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:52:48.237593 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:52:48.237680 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:52:48.260468 1437114 cri.go:89] found id: ""
	I1209 05:52:48.260493 1437114 logs.go:282] 0 containers: []
	W1209 05:52:48.260502 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:52:48.260509 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:52:48.260570 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:52:48.289034 1437114 cri.go:89] found id: ""
	I1209 05:52:48.289059 1437114 logs.go:282] 0 containers: []
	W1209 05:52:48.289068 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:52:48.289074 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:52:48.289150 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:52:48.316323 1437114 cri.go:89] found id: ""
	I1209 05:52:48.316349 1437114 logs.go:282] 0 containers: []
	W1209 05:52:48.316358 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:52:48.316364 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:52:48.316434 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:52:48.342218 1437114 cri.go:89] found id: ""
	I1209 05:52:48.342240 1437114 logs.go:282] 0 containers: []
	W1209 05:52:48.342249 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:52:48.342255 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:52:48.342308 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:52:48.371363 1437114 cri.go:89] found id: ""
	I1209 05:52:48.371390 1437114 logs.go:282] 0 containers: []
	W1209 05:52:48.371399 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:52:48.371406 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:52:48.371466 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:52:48.395178 1437114 cri.go:89] found id: ""
	I1209 05:52:48.395204 1437114 logs.go:282] 0 containers: []
	W1209 05:52:48.395212 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:52:48.395218 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:52:48.395274 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:52:48.419670 1437114 cri.go:89] found id: ""
	I1209 05:52:48.419709 1437114 logs.go:282] 0 containers: []
	W1209 05:52:48.419718 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:52:48.419740 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:52:48.419825 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:52:48.461924 1437114 cri.go:89] found id: ""
	I1209 05:52:48.461946 1437114 logs.go:282] 0 containers: []
	W1209 05:52:48.461954 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:52:48.461963 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:52:48.461974 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:52:48.528889 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:52:48.528926 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:52:48.544946 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:52:48.544976 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:52:48.610447 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:52:48.602428    3879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:48.603193    3879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:48.604673    3879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:48.605169    3879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:48.606641    3879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:52:48.610466 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:52:48.610478 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:52:48.636193 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:52:48.636232 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:52:49.474531 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1209 05:52:49.539382 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 05:52:49.539481 1437114 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1209 05:52:49.543501 1437114 out.go:179] * Enabled addons: 
	I1209 05:52:49.546285 1437114 addons.go:530] duration metric: took 1m54.169473068s for enable addons: enabled=[]
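The dashboard enable above fails because kubectl's client-side validation must download /openapi/v2 from the apiserver, and nothing is listening on localhost:8443; minikube logs "apply failed, will retry" and eventually gives up with enabled=[]. A minimal sketch of that retry pattern, assuming a plain kubectl on PATH; applyWithRetry and its backoff are illustrative, not minikube's actual addons.go code:

    // Hedged sketch of an apply-with-retry helper, loosely modeled on the
    // "apply failed, will retry" behavior in the log above. Illustrative only.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // applyWithRetry shells out to kubectl and retries on failure, since
    // client-side validation needs a reachable apiserver to succeed.
    func applyWithRetry(kubeconfig string, manifests []string, attempts int) error {
    	args := append([]string{"--kubeconfig", kubeconfig, "apply", "--force"}, fileArgs(manifests)...)
    	var lastErr error
    	for i := 0; i < attempts; i++ {
    		out, err := exec.Command("kubectl", args...).CombinedOutput()
    		if err == nil {
    			return nil
    		}
    		lastErr = fmt.Errorf("apply failed (attempt %d): %v\n%s", i+1, err, out)
    		time.Sleep(2 * time.Second) // fixed backoff, for the sketch only
    	}
    	return lastErr
    }

    func fileArgs(files []string) []string {
    	var args []string
    	for _, f := range files {
    		args = append(args, "-f", f)
    	}
    	return args
    }

    func main() {
    	err := applyWithRetry("/var/lib/minikube/kubeconfig",
    		[]string{"/etc/kubernetes/addons/dashboard-ns.yaml"}, 3)
    	fmt.Println("result:", err)
    }

The --validate=false escape hatch suggested in the stderr would skip the openapi download, but it only masks the underlying unreachable apiserver.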
	I1209 05:52:51.163525 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:52:51.174339 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:52:51.174465 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:52:51.198800 1437114 cri.go:89] found id: ""
	I1209 05:52:51.198828 1437114 logs.go:282] 0 containers: []
	W1209 05:52:51.198837 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:52:51.198843 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:52:51.198901 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:52:51.224524 1437114 cri.go:89] found id: ""
	I1209 05:52:51.224552 1437114 logs.go:282] 0 containers: []
	W1209 05:52:51.224561 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:52:51.224568 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:52:51.224626 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:52:51.249032 1437114 cri.go:89] found id: ""
	I1209 05:52:51.249099 1437114 logs.go:282] 0 containers: []
	W1209 05:52:51.249122 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:52:51.249136 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:52:51.249210 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:52:51.272901 1437114 cri.go:89] found id: ""
	I1209 05:52:51.272929 1437114 logs.go:282] 0 containers: []
	W1209 05:52:51.272937 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:52:51.272950 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:52:51.273011 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:52:51.296909 1437114 cri.go:89] found id: ""
	I1209 05:52:51.296935 1437114 logs.go:282] 0 containers: []
	W1209 05:52:51.296943 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:52:51.296949 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:52:51.297007 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:52:51.325419 1437114 cri.go:89] found id: ""
	I1209 05:52:51.325499 1437114 logs.go:282] 0 containers: []
	W1209 05:52:51.325522 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:52:51.325537 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:52:51.325609 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:52:51.350449 1437114 cri.go:89] found id: ""
	I1209 05:52:51.350475 1437114 logs.go:282] 0 containers: []
	W1209 05:52:51.350484 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:52:51.350490 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:52:51.350571 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:52:51.378459 1437114 cri.go:89] found id: ""
	I1209 05:52:51.378482 1437114 logs.go:282] 0 containers: []
	W1209 05:52:51.378490 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
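Each diagnostic sweep above queries crictl for every expected control-plane container by name; an empty --quiet listing is what produces the repeated "No container was found matching ..." warnings. A sketch of that sweep, assuming crictl is on PATH and sudo is non-interactive (illustrative code, not minikube's cri.go):

    // Hedged sketch of the control-plane sweep seen in the log: for each
    // expected component, run "crictl ps -a --quiet --name=<x>" and report
    // whether any container ID comes back.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	components := []string{
    		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
    	}
    	for _, name := range components {
    		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
    		ids := strings.Fields(string(out)) // --quiet prints one ID per line
    		if err != nil || len(ids) == 0 {
    			fmt.Printf("no container found matching %q\n", name)
    			continue
    		}
    		fmt.Printf("%s: %d container(s): %v\n", name, len(ids), ids)
    	}
    }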
	I1209 05:52:51.378501 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:52:51.378512 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:52:51.439032 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:52:51.439075 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:52:51.457325 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:52:51.457355 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:52:51.525486 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:52:51.517693    3998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:51.518243    3998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:51.519766    3998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:51.520306    3998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:51.521762    3998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:52:51.517693    3998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:51.518243    3998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:51.519766    3998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:51.520306    3998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:51.521762    3998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
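The "describe nodes" gather runs the version-matched kubectl against the cluster's kubeconfig and records both streams; with no apiserver, stderr is the connection-refused text captured above. A minimal reproduction using the binary and kubeconfig paths from the log (the wrapper itself is illustrative):

    // Hedged sketch of the "describe nodes" gather as run by logs.go above.
    // Paths are taken from the log; the program around them is illustrative.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	kubectl := "/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl"
    	out, err := exec.Command("sudo", kubectl, "describe", "nodes",
    		"--kubeconfig=/var/lib/minikube/kubeconfig").CombinedOutput()
    	if err != nil {
    		// With the apiserver down this prints the same "connection
    		// refused" stderr captured in the report.
    		fmt.Printf("describe nodes failed: %v\n%s", err, out)
    		return
    	}
    	fmt.Println(string(out))
    }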
	I1209 05:52:51.525549 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:52:51.525570 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:52:51.551425 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:52:51.551463 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
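The "container status" line just above uses a shell fallback: resolve crictl with which, and fall back to docker ps -a if crictl is absent or fails. The same preference order in Go, as a hedged sketch (containerStatus is an illustrative name):

    // Hedged sketch of the container-status fallback: prefer crictl, fall
    // back to docker when crictl is missing or errors. Illustrative only.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func containerStatus() (string, error) {
    	// exec.LookPath mirrors the `which crictl` test in the shell line.
    	if _, err := exec.LookPath("crictl"); err == nil {
    		if out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput(); err == nil {
    			return string(out), nil
    		}
    	}
    	// Fall back to docker, as the `|| sudo docker ps -a` branch does.
    	out, err := exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
    	return string(out), err
    }

    func main() {
    	out, err := containerStatus()
    	fmt.Println(out, err)
    }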
	I1209 05:52:54.078624 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:52:54.089324 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:52:54.089395 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:52:54.117819 1437114 cri.go:89] found id: ""
	I1209 05:52:54.117840 1437114 logs.go:282] 0 containers: []
	W1209 05:52:54.117856 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:52:54.117863 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:52:54.117923 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:52:54.143006 1437114 cri.go:89] found id: ""
	I1209 05:52:54.143083 1437114 logs.go:282] 0 containers: []
	W1209 05:52:54.143105 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:52:54.143125 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:52:54.143200 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:52:54.168655 1437114 cri.go:89] found id: ""
	I1209 05:52:54.168715 1437114 logs.go:282] 0 containers: []
	W1209 05:52:54.168742 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:52:54.168758 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:52:54.168847 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:52:54.193433 1437114 cri.go:89] found id: ""
	I1209 05:52:54.193459 1437114 logs.go:282] 0 containers: []
	W1209 05:52:54.193467 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:52:54.193474 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:52:54.193558 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:52:54.216587 1437114 cri.go:89] found id: ""
	I1209 05:52:54.216663 1437114 logs.go:282] 0 containers: []
	W1209 05:52:54.216686 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:52:54.216700 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:52:54.216775 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:52:54.240686 1437114 cri.go:89] found id: ""
	I1209 05:52:54.240723 1437114 logs.go:282] 0 containers: []
	W1209 05:52:54.240732 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:52:54.240739 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:52:54.240830 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:52:54.264680 1437114 cri.go:89] found id: ""
	I1209 05:52:54.264710 1437114 logs.go:282] 0 containers: []
	W1209 05:52:54.264719 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:52:54.264725 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:52:54.264785 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:52:54.288715 1437114 cri.go:89] found id: ""
	I1209 05:52:54.288739 1437114 logs.go:282] 0 containers: []
	W1209 05:52:54.288748 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:52:54.288757 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:52:54.288769 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:52:54.344591 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:52:54.344629 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:52:54.360275 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:52:54.360350 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:52:54.422057 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:52:54.413842    4106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:54.414541    4106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:54.416178    4106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:54.416655    4106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:54.418204    4106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:52:54.413842    4106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:54.414541    4106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:54.416178    4106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:54.416655    4106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:54.418204    4106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:52:54.422081 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:52:54.422093 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:52:54.451978 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:52:54.452157 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:52:56.987228 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:52:56.997370 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:52:56.997440 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:52:57.026856 1437114 cri.go:89] found id: ""
	I1209 05:52:57.026878 1437114 logs.go:282] 0 containers: []
	W1209 05:52:57.026886 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:52:57.026893 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:52:57.026955 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:52:57.052417 1437114 cri.go:89] found id: ""
	I1209 05:52:57.052442 1437114 logs.go:282] 0 containers: []
	W1209 05:52:57.052450 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:52:57.052457 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:52:57.052517 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:52:57.079492 1437114 cri.go:89] found id: ""
	I1209 05:52:57.079516 1437114 logs.go:282] 0 containers: []
	W1209 05:52:57.079526 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:52:57.079532 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:52:57.079590 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:52:57.103111 1437114 cri.go:89] found id: ""
	I1209 05:52:57.103135 1437114 logs.go:282] 0 containers: []
	W1209 05:52:57.103144 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:52:57.103150 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:52:57.103212 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:52:57.129591 1437114 cri.go:89] found id: ""
	I1209 05:52:57.129616 1437114 logs.go:282] 0 containers: []
	W1209 05:52:57.129624 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:52:57.129631 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:52:57.129706 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:52:57.153092 1437114 cri.go:89] found id: ""
	I1209 05:52:57.153115 1437114 logs.go:282] 0 containers: []
	W1209 05:52:57.153124 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:52:57.153131 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:52:57.153189 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:52:57.177623 1437114 cri.go:89] found id: ""
	I1209 05:52:57.177647 1437114 logs.go:282] 0 containers: []
	W1209 05:52:57.177656 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:52:57.177662 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:52:57.177748 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:52:57.202469 1437114 cri.go:89] found id: ""
	I1209 05:52:57.202493 1437114 logs.go:282] 0 containers: []
	W1209 05:52:57.202502 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:52:57.202511 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:52:57.202550 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:52:57.260356 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:52:57.260393 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:52:57.276459 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:52:57.276539 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:52:57.343015 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:52:57.335090    4217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:57.335845    4217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:57.337423    4217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:57.337717    4217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:57.339202    4217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:52:57.335090    4217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:57.335845    4217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:57.337423    4217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:57.337717    4217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:57.339202    4217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:52:57.343037 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:52:57.343052 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:52:57.368448 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:52:57.368485 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
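Each cycle opens with pgrep -xnf kube-apiserver.*minikube.*, a full-command-line match (-f) against the newest matching process (-n), required to match exactly (-x); pgrep exits non-zero when nothing matches, which is why the sweep keeps repeating. A sketch of that probe, assuming passwordless sudo as on the CI host (apiserverRunning is an illustrative helper):

    // Hedged sketch of the apiserver process probe that starts each cycle.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func apiserverRunning() bool {
    	// pgrep exits 0 iff at least one process matched the pattern.
    	err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
    	return err == nil
    }

    func main() {
    	fmt.Println("kube-apiserver up:", apiserverRunning())
    }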
	I1209 05:52:59.899132 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:52:59.909390 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:52:59.909502 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:52:59.942228 1437114 cri.go:89] found id: ""
	I1209 05:52:59.942299 1437114 logs.go:282] 0 containers: []
	W1209 05:52:59.942333 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:52:59.942354 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:52:59.942464 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:52:59.967993 1437114 cri.go:89] found id: ""
	I1209 05:52:59.968090 1437114 logs.go:282] 0 containers: []
	W1209 05:52:59.968105 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:52:59.968112 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:52:59.968183 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:53:00.004409 1437114 cri.go:89] found id: ""
	I1209 05:53:00.004444 1437114 logs.go:282] 0 containers: []
	W1209 05:53:00.004453 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:53:00.004461 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:53:00.004542 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:53:00.122181 1437114 cri.go:89] found id: ""
	I1209 05:53:00.122206 1437114 logs.go:282] 0 containers: []
	W1209 05:53:00.122216 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:53:00.122238 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:53:00.122319 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:53:00.178386 1437114 cri.go:89] found id: ""
	I1209 05:53:00.178469 1437114 logs.go:282] 0 containers: []
	W1209 05:53:00.178481 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:53:00.178488 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:53:00.178720 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:53:00.226314 1437114 cri.go:89] found id: ""
	I1209 05:53:00.226451 1437114 logs.go:282] 0 containers: []
	W1209 05:53:00.226477 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:53:00.226486 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:53:00.226568 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:53:00.271734 1437114 cri.go:89] found id: ""
	I1209 05:53:00.271771 1437114 logs.go:282] 0 containers: []
	W1209 05:53:00.271782 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:53:00.271790 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:53:00.271932 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:53:00.335362 1437114 cri.go:89] found id: ""
	I1209 05:53:00.335448 1437114 logs.go:282] 0 containers: []
	W1209 05:53:00.335466 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:53:00.335477 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:53:00.335493 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:53:00.365642 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:53:00.365684 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:53:00.400318 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:53:00.400349 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:53:00.462709 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:53:00.462752 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:53:00.480156 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:53:00.480188 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:53:00.548948 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:53:00.540982    4347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:00.541655    4347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:00.543286    4347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:00.543662    4347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:00.545115    4347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:53:00.540982    4347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:00.541655    4347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:00.543286    4347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:00.543662    4347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:00.545115    4347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
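The journal-based gathers are uniform: the last 400 lines of each unit via journalctl -u <unit> -n 400, plus a severity-filtered dmesg tail. A hedged sketch of the journal half (unitLogs is an illustrative name, not minikube's API):

    // Hedged sketch of the per-unit log gather seen throughout this report.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func unitLogs(unit string, lines int) (string, error) {
    	out, err := exec.Command("sudo", "journalctl", "-u", unit,
    		"-n", fmt.Sprint(lines)).CombinedOutput()
    	return string(out), err
    }

    func main() {
    	for _, u := range []string{"kubelet", "containerd"} {
    		out, err := unitLogs(u, 400)
    		fmt.Printf("=== %s (err=%v) ===\n%s\n", u, err, out)
    	}
    }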
	I1209 05:53:03.050610 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:53:03.061297 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:53:03.061406 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:53:03.090201 1437114 cri.go:89] found id: ""
	I1209 05:53:03.090232 1437114 logs.go:282] 0 containers: []
	W1209 05:53:03.090240 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:53:03.090248 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:53:03.090313 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:53:03.115399 1437114 cri.go:89] found id: ""
	I1209 05:53:03.115424 1437114 logs.go:282] 0 containers: []
	W1209 05:53:03.115432 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:53:03.115438 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:53:03.115497 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:53:03.138652 1437114 cri.go:89] found id: ""
	I1209 05:53:03.138685 1437114 logs.go:282] 0 containers: []
	W1209 05:53:03.138694 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:53:03.138700 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:53:03.138771 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:53:03.163354 1437114 cri.go:89] found id: ""
	I1209 05:53:03.163387 1437114 logs.go:282] 0 containers: []
	W1209 05:53:03.163396 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:53:03.163402 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:53:03.163467 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:53:03.189982 1437114 cri.go:89] found id: ""
	I1209 05:53:03.190008 1437114 logs.go:282] 0 containers: []
	W1209 05:53:03.190016 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:53:03.190023 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:53:03.190100 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:53:03.214072 1437114 cri.go:89] found id: ""
	I1209 05:53:03.214100 1437114 logs.go:282] 0 containers: []
	W1209 05:53:03.214109 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:53:03.214115 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:53:03.214193 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:53:03.238571 1437114 cri.go:89] found id: ""
	I1209 05:53:03.238605 1437114 logs.go:282] 0 containers: []
	W1209 05:53:03.238614 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:53:03.238620 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:53:03.238713 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:53:03.262760 1437114 cri.go:89] found id: ""
	I1209 05:53:03.262791 1437114 logs.go:282] 0 containers: []
	W1209 05:53:03.262800 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:53:03.262825 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:53:03.262848 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:53:03.278402 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:53:03.278430 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:53:03.340382 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:53:03.332086    4439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:03.332485    4439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:03.334108    4439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:03.334685    4439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:03.336430    4439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:53:03.332086    4439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:03.332485    4439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:03.334108    4439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:03.334685    4439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:03.336430    4439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:53:03.340405 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:53:03.340420 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:53:03.367157 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:53:03.367193 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:53:03.394767 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:53:03.394794 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
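The timestamps show the whole sweep repeating every two to three seconds while nothing listens on 8443. A sketch of the implied wait loop, reduced to a raw TCP probe of the apiserver port; the interval and timeout are assumptions read off the log gaps, not minikube's actual settings:

    // Hedged sketch: poll the apiserver port until it accepts a connection
    // or a deadline passes, mirroring the retry cadence visible above.
    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func waitForAPIServer(addr string, interval, timeout time.Duration) bool {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if conn, err := net.DialTimeout("tcp", addr, 2*time.Second); err == nil {
    			conn.Close()
    			return true
    		}
    		time.Sleep(interval) // ~2-3 s gaps, matching the log timestamps
    	}
    	return false
    }

    func main() {
    	fmt.Println("apiserver came up:", waitForAPIServer("localhost:8443", 3*time.Second, time.Minute))
    }

A failed dial here reproduces exactly the "dial tcp [::1]:8443: connect: connection refused" error that every kubectl invocation in this section reports.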
	I1209 05:53:05.953212 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:53:05.965657 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:53:05.965739 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:53:06.020272 1437114 cri.go:89] found id: ""
	I1209 05:53:06.020296 1437114 logs.go:282] 0 containers: []
	W1209 05:53:06.020305 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:53:06.020311 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:53:06.020379 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:53:06.045735 1437114 cri.go:89] found id: ""
	I1209 05:53:06.045757 1437114 logs.go:282] 0 containers: []
	W1209 05:53:06.045766 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:53:06.045772 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:53:06.045832 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:53:06.072090 1437114 cri.go:89] found id: ""
	I1209 05:53:06.072119 1437114 logs.go:282] 0 containers: []
	W1209 05:53:06.072129 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:53:06.072136 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:53:06.072225 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:53:06.097096 1437114 cri.go:89] found id: ""
	I1209 05:53:06.097121 1437114 logs.go:282] 0 containers: []
	W1209 05:53:06.097130 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:53:06.097137 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:53:06.097214 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:53:06.121406 1437114 cri.go:89] found id: ""
	I1209 05:53:06.121431 1437114 logs.go:282] 0 containers: []
	W1209 05:53:06.121439 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:53:06.121446 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:53:06.121503 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:53:06.146550 1437114 cri.go:89] found id: ""
	I1209 05:53:06.146585 1437114 logs.go:282] 0 containers: []
	W1209 05:53:06.146594 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:53:06.146601 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:53:06.146667 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:53:06.173744 1437114 cri.go:89] found id: ""
	I1209 05:53:06.173779 1437114 logs.go:282] 0 containers: []
	W1209 05:53:06.173788 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:53:06.173794 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:53:06.173852 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:53:06.196867 1437114 cri.go:89] found id: ""
	I1209 05:53:06.196892 1437114 logs.go:282] 0 containers: []
	W1209 05:53:06.196901 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:53:06.196911 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:53:06.196922 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:53:06.252507 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:53:06.252544 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:53:06.268558 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:53:06.268588 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:53:06.335400 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:53:06.327269    4553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:06.327995    4553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:06.329562    4553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:06.330075    4553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:06.331590    4553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:53:06.327269    4553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:06.327995    4553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:06.329562    4553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:06.330075    4553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:06.331590    4553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:53:06.335432 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:53:06.335445 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:53:06.361277 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:53:06.361311 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:53:08.892899 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:53:08.903128 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:53:08.903197 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:53:08.927271 1437114 cri.go:89] found id: ""
	I1209 05:53:08.927347 1437114 logs.go:282] 0 containers: []
	W1209 05:53:08.927363 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:53:08.927371 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:53:08.927437 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:53:08.958272 1437114 cri.go:89] found id: ""
	I1209 05:53:08.958296 1437114 logs.go:282] 0 containers: []
	W1209 05:53:08.958305 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:53:08.958312 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:53:08.958389 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:53:08.992109 1437114 cri.go:89] found id: ""
	I1209 05:53:08.992174 1437114 logs.go:282] 0 containers: []
	W1209 05:53:08.992196 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:53:08.992217 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:53:08.992284 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:53:09.021977 1437114 cri.go:89] found id: ""
	I1209 05:53:09.022053 1437114 logs.go:282] 0 containers: []
	W1209 05:53:09.022069 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:53:09.022076 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:53:09.022135 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:53:09.045707 1437114 cri.go:89] found id: ""
	I1209 05:53:09.045731 1437114 logs.go:282] 0 containers: []
	W1209 05:53:09.045739 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:53:09.045745 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:53:09.045801 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:53:09.070070 1437114 cri.go:89] found id: ""
	I1209 05:53:09.070103 1437114 logs.go:282] 0 containers: []
	W1209 05:53:09.070112 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:53:09.070118 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:53:09.070186 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:53:09.094488 1437114 cri.go:89] found id: ""
	I1209 05:53:09.094513 1437114 logs.go:282] 0 containers: []
	W1209 05:53:09.094530 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:53:09.094537 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:53:09.094606 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:53:09.118093 1437114 cri.go:89] found id: ""
	I1209 05:53:09.118132 1437114 logs.go:282] 0 containers: []
	W1209 05:53:09.118141 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:53:09.118150 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:53:09.118161 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:53:09.179308 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:53:09.171279    4659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:09.171791    4659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:09.173320    4659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:09.173784    4659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:09.175502    4659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:53:09.179376 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:53:09.179404 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:53:09.204829 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:53:09.204867 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:53:09.232053 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:53:09.232131 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:53:09.292412 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:53:09.292453 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
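
Each block above is one pass of minikube's apiserver health-check loop: it probes for a kube-apiserver process with pgrep, queries crictl for each expected control-plane container, and, finding none, falls back to gathering kubelet, dmesg, describe-nodes, containerd, and container status logs (the gather order varies between passes). A minimal bash sketch of the same per-component check, assuming shell access to the node, for example via "minikube ssh -p $PROFILE", where $PROFILE is a hypothetical placeholder for the profile under test:

	# Sketch only: mirrors the crictl queries in this loop, not minikube's own code.
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	            kube-controller-manager kindnet kubernetes-dashboard; do
	  ids=$(sudo crictl ps -a --quiet --name="$name")
	  if [ -z "$ids" ]; then
	    echo "no container matching \"$name\""
	  else
	    echo "$name: $ids"
	  fi
	done
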
	I1209 05:53:11.810473 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:53:11.820642 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:53:11.820731 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:53:11.844911 1437114 cri.go:89] found id: ""
	I1209 05:53:11.844935 1437114 logs.go:282] 0 containers: []
	W1209 05:53:11.844944 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:53:11.844951 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:53:11.845057 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:53:11.868554 1437114 cri.go:89] found id: ""
	I1209 05:53:11.868628 1437114 logs.go:282] 0 containers: []
	W1209 05:53:11.868642 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:53:11.868649 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:53:11.868713 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:53:11.893204 1437114 cri.go:89] found id: ""
	I1209 05:53:11.893229 1437114 logs.go:282] 0 containers: []
	W1209 05:53:11.893237 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:53:11.893243 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:53:11.893307 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:53:11.922205 1437114 cri.go:89] found id: ""
	I1209 05:53:11.922235 1437114 logs.go:282] 0 containers: []
	W1209 05:53:11.922244 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:53:11.922250 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:53:11.922314 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:53:11.969099 1437114 cri.go:89] found id: ""
	I1209 05:53:11.969172 1437114 logs.go:282] 0 containers: []
	W1209 05:53:11.969195 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:53:11.969222 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:53:11.969335 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:53:11.999668 1437114 cri.go:89] found id: ""
	I1209 05:53:11.999694 1437114 logs.go:282] 0 containers: []
	W1209 05:53:11.999702 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:53:11.999709 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:53:11.999798 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:53:12.027989 1437114 cri.go:89] found id: ""
	I1209 05:53:12.028053 1437114 logs.go:282] 0 containers: []
	W1209 05:53:12.028062 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:53:12.028083 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:53:12.028182 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:53:12.060174 1437114 cri.go:89] found id: ""
	I1209 05:53:12.060202 1437114 logs.go:282] 0 containers: []
	W1209 05:53:12.060211 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:53:12.060220 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:53:12.060260 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:53:12.121282 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:53:12.121323 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:53:12.137566 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:53:12.137595 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:53:12.205667 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:53:12.197778    4774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:12.198341    4774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:12.199791    4774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:12.200371    4774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:12.201936    4774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:53:12.205687 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:53:12.205700 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:53:12.230499 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:53:12.230532 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:53:14.761775 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:53:14.772764 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:53:14.772836 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:53:14.796366 1437114 cri.go:89] found id: ""
	I1209 05:53:14.796391 1437114 logs.go:282] 0 containers: []
	W1209 05:53:14.796399 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:53:14.796406 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:53:14.796479 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:53:14.821766 1437114 cri.go:89] found id: ""
	I1209 05:53:14.821793 1437114 logs.go:282] 0 containers: []
	W1209 05:53:14.821802 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:53:14.821808 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:53:14.821868 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:53:14.846798 1437114 cri.go:89] found id: ""
	I1209 05:53:14.846823 1437114 logs.go:282] 0 containers: []
	W1209 05:53:14.846832 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:53:14.846838 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:53:14.846896 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:53:14.870638 1437114 cri.go:89] found id: ""
	I1209 05:53:14.870668 1437114 logs.go:282] 0 containers: []
	W1209 05:53:14.870677 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:53:14.870683 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:53:14.870741 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:53:14.894543 1437114 cri.go:89] found id: ""
	I1209 05:53:14.894571 1437114 logs.go:282] 0 containers: []
	W1209 05:53:14.894580 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:53:14.894586 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:53:14.894650 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:53:14.918572 1437114 cri.go:89] found id: ""
	I1209 05:53:14.918601 1437114 logs.go:282] 0 containers: []
	W1209 05:53:14.918610 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:53:14.918617 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:53:14.918699 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:53:14.947884 1437114 cri.go:89] found id: ""
	I1209 05:53:14.947914 1437114 logs.go:282] 0 containers: []
	W1209 05:53:14.947922 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:53:14.947928 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:53:14.948004 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:53:14.989982 1437114 cri.go:89] found id: ""
	I1209 05:53:14.990055 1437114 logs.go:282] 0 containers: []
	W1209 05:53:14.990078 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:53:14.990099 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:53:14.990137 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:53:15.012208 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:53:15.012307 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:53:15.086674 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:53:15.078145    4882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:15.078866    4882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:15.080581    4882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:15.081087    4882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:15.082649    4882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:53:15.086740 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:53:15.086766 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:53:15.112587 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:53:15.112623 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:53:15.141472 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:53:15.141502 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:53:17.701838 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:53:17.713895 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:53:17.713963 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:53:17.745334 1437114 cri.go:89] found id: ""
	I1209 05:53:17.745357 1437114 logs.go:282] 0 containers: []
	W1209 05:53:17.745366 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:53:17.745372 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:53:17.745470 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:53:17.770153 1437114 cri.go:89] found id: ""
	I1209 05:53:17.770220 1437114 logs.go:282] 0 containers: []
	W1209 05:53:17.770244 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:53:17.770263 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:53:17.770326 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:53:17.795244 1437114 cri.go:89] found id: ""
	I1209 05:53:17.795278 1437114 logs.go:282] 0 containers: []
	W1209 05:53:17.795287 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:53:17.795293 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:53:17.795388 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:53:17.822017 1437114 cri.go:89] found id: ""
	I1209 05:53:17.822040 1437114 logs.go:282] 0 containers: []
	W1209 05:53:17.822049 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:53:17.822055 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:53:17.822132 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:53:17.850510 1437114 cri.go:89] found id: ""
	I1209 05:53:17.850532 1437114 logs.go:282] 0 containers: []
	W1209 05:53:17.850541 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:53:17.850566 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:53:17.850624 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:53:17.875231 1437114 cri.go:89] found id: ""
	I1209 05:53:17.875314 1437114 logs.go:282] 0 containers: []
	W1209 05:53:17.875337 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:53:17.875359 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:53:17.875488 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:53:17.901146 1437114 cri.go:89] found id: ""
	I1209 05:53:17.901169 1437114 logs.go:282] 0 containers: []
	W1209 05:53:17.901178 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:53:17.901207 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:53:17.901291 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:53:17.924362 1437114 cri.go:89] found id: ""
	I1209 05:53:17.924386 1437114 logs.go:282] 0 containers: []
	W1209 05:53:17.924395 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:53:17.924404 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:53:17.924415 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:53:17.987361 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:53:17.987403 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:53:18.004290 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:53:18.004323 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:53:18.072148 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:53:18.062877    5000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:18.063667    5000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:18.065532    5000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:18.066146    5000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:18.067899    5000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:53:18.072181 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:53:18.072194 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:53:18.098033 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:53:18.098071 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
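
Every "describe nodes" failure above reduces to the same symptom: nothing is listening on localhost:8443, so each of client-go's discovery attempts is refused before kubectl gives up. A quick manual confirmation from the node, assuming ss and curl are present there (a diagnostic sketch, not part of the test harness):

	# Is anything listening on the apiserver port?
	sudo ss -ltn | grep -w 8443 || echo "no listener on 8443"
	# Does the apiserver answer its health endpoint? (-k skips certificate checks)
	curl -ksS https://localhost:8443/livez || echo "apiserver unreachable"
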
	I1209 05:53:20.625561 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:53:20.635963 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:53:20.636053 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:53:20.659961 1437114 cri.go:89] found id: ""
	I1209 05:53:20.659984 1437114 logs.go:282] 0 containers: []
	W1209 05:53:20.659994 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:53:20.660000 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:53:20.660075 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:53:20.690085 1437114 cri.go:89] found id: ""
	I1209 05:53:20.690119 1437114 logs.go:282] 0 containers: []
	W1209 05:53:20.690128 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:53:20.690134 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:53:20.690199 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:53:20.722202 1437114 cri.go:89] found id: ""
	I1209 05:53:20.722238 1437114 logs.go:282] 0 containers: []
	W1209 05:53:20.722247 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:53:20.722254 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:53:20.722319 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:53:20.754033 1437114 cri.go:89] found id: ""
	I1209 05:53:20.754057 1437114 logs.go:282] 0 containers: []
	W1209 05:53:20.754066 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:53:20.754073 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:53:20.754157 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:53:20.778306 1437114 cri.go:89] found id: ""
	I1209 05:53:20.778332 1437114 logs.go:282] 0 containers: []
	W1209 05:53:20.778341 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:53:20.778349 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:53:20.778427 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:53:20.802477 1437114 cri.go:89] found id: ""
	I1209 05:53:20.802501 1437114 logs.go:282] 0 containers: []
	W1209 05:53:20.802510 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:53:20.802516 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:53:20.802605 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:53:20.833205 1437114 cri.go:89] found id: ""
	I1209 05:53:20.833231 1437114 logs.go:282] 0 containers: []
	W1209 05:53:20.833239 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:53:20.833246 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:53:20.833310 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:53:20.858107 1437114 cri.go:89] found id: ""
	I1209 05:53:20.858172 1437114 logs.go:282] 0 containers: []
	W1209 05:53:20.858188 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:53:20.858198 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:53:20.858209 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:53:20.914050 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:53:20.914088 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:53:20.930297 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:53:20.930326 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:53:21.009735 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:53:20.998811    5112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:20.999637    5112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:21.001322    5112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:21.001871    5112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:21.003770    5112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:53:21.009759 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:53:21.009772 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:53:21.035653 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:53:21.035687 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:53:23.563248 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:53:23.574010 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:53:23.574087 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:53:23.603557 1437114 cri.go:89] found id: ""
	I1209 05:53:23.603583 1437114 logs.go:282] 0 containers: []
	W1209 05:53:23.603593 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:53:23.603599 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:53:23.603658 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:53:23.629927 1437114 cri.go:89] found id: ""
	I1209 05:53:23.629953 1437114 logs.go:282] 0 containers: []
	W1209 05:53:23.629961 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:53:23.629967 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:53:23.630029 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:53:23.654017 1437114 cri.go:89] found id: ""
	I1209 05:53:23.654042 1437114 logs.go:282] 0 containers: []
	W1209 05:53:23.654050 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:53:23.654057 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:53:23.654114 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:53:23.681104 1437114 cri.go:89] found id: ""
	I1209 05:53:23.681126 1437114 logs.go:282] 0 containers: []
	W1209 05:53:23.681134 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:53:23.681140 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:53:23.681210 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:53:23.717733 1437114 cri.go:89] found id: ""
	I1209 05:53:23.717754 1437114 logs.go:282] 0 containers: []
	W1209 05:53:23.717763 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:53:23.717769 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:53:23.717826 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:53:23.746697 1437114 cri.go:89] found id: ""
	I1209 05:53:23.746718 1437114 logs.go:282] 0 containers: []
	W1209 05:53:23.746727 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:53:23.746734 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:53:23.746791 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:53:23.771013 1437114 cri.go:89] found id: ""
	I1209 05:53:23.771035 1437114 logs.go:282] 0 containers: []
	W1209 05:53:23.771043 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:53:23.771049 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:53:23.771110 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:53:23.797671 1437114 cri.go:89] found id: ""
	I1209 05:53:23.797695 1437114 logs.go:282] 0 containers: []
	W1209 05:53:23.797705 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:53:23.797714 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:53:23.797727 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:53:23.863004 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:53:23.854866    5216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:23.855647    5216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:23.857241    5216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:23.857752    5216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:23.859306    5216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:53:23.863025 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:53:23.863039 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:53:23.888849 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:53:23.888886 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:53:23.918103 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:53:23.918129 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:53:23.981103 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:53:23.981139 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:53:26.502565 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:53:26.513114 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:53:26.513204 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:53:26.536286 1437114 cri.go:89] found id: ""
	I1209 05:53:26.536352 1437114 logs.go:282] 0 containers: []
	W1209 05:53:26.536366 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:53:26.536373 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:53:26.536448 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:53:26.567137 1437114 cri.go:89] found id: ""
	I1209 05:53:26.567165 1437114 logs.go:282] 0 containers: []
	W1209 05:53:26.567174 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:53:26.567181 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:53:26.567255 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:53:26.593992 1437114 cri.go:89] found id: ""
	I1209 05:53:26.594018 1437114 logs.go:282] 0 containers: []
	W1209 05:53:26.594027 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:53:26.594033 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:53:26.594112 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:53:26.622318 1437114 cri.go:89] found id: ""
	I1209 05:53:26.622341 1437114 logs.go:282] 0 containers: []
	W1209 05:53:26.622349 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:53:26.622356 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:53:26.622436 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:53:26.647615 1437114 cri.go:89] found id: ""
	I1209 05:53:26.647689 1437114 logs.go:282] 0 containers: []
	W1209 05:53:26.647724 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:53:26.647744 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:53:26.647837 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:53:26.672100 1437114 cri.go:89] found id: ""
	I1209 05:53:26.672174 1437114 logs.go:282] 0 containers: []
	W1209 05:53:26.672189 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:53:26.672197 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:53:26.672268 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:53:26.702289 1437114 cri.go:89] found id: ""
	I1209 05:53:26.702322 1437114 logs.go:282] 0 containers: []
	W1209 05:53:26.702331 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:53:26.702355 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:53:26.702438 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:53:26.732737 1437114 cri.go:89] found id: ""
	I1209 05:53:26.732807 1437114 logs.go:282] 0 containers: []
	W1209 05:53:26.732831 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:53:26.732855 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:53:26.732894 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:53:26.749702 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:53:26.749778 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:53:26.813476 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:53:26.805499    5331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:26.805968    5331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:26.807499    5331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:26.807884    5331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:26.809521    5331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:53:26.813510 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:53:26.813524 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:53:26.839545 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:53:26.839583 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:53:26.866441 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:53:26.866469 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:53:29.424166 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:53:29.435921 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:53:29.435993 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:53:29.462038 1437114 cri.go:89] found id: ""
	I1209 05:53:29.462060 1437114 logs.go:282] 0 containers: []
	W1209 05:53:29.462068 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:53:29.462074 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:53:29.462134 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:53:29.485671 1437114 cri.go:89] found id: ""
	I1209 05:53:29.485695 1437114 logs.go:282] 0 containers: []
	W1209 05:53:29.485704 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:53:29.485710 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:53:29.485765 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:53:29.508799 1437114 cri.go:89] found id: ""
	I1209 05:53:29.508829 1437114 logs.go:282] 0 containers: []
	W1209 05:53:29.508838 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:53:29.508844 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:53:29.508910 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:53:29.533027 1437114 cri.go:89] found id: ""
	I1209 05:53:29.533052 1437114 logs.go:282] 0 containers: []
	W1209 05:53:29.533060 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:53:29.533066 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:53:29.533151 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:53:29.565784 1437114 cri.go:89] found id: ""
	I1209 05:53:29.565811 1437114 logs.go:282] 0 containers: []
	W1209 05:53:29.565819 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:53:29.565825 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:53:29.565882 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:53:29.590917 1437114 cri.go:89] found id: ""
	I1209 05:53:29.590943 1437114 logs.go:282] 0 containers: []
	W1209 05:53:29.590951 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:53:29.590957 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:53:29.591014 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:53:29.618282 1437114 cri.go:89] found id: ""
	I1209 05:53:29.618307 1437114 logs.go:282] 0 containers: []
	W1209 05:53:29.618316 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:53:29.618322 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:53:29.618381 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:53:29.646902 1437114 cri.go:89] found id: ""
	I1209 05:53:29.646936 1437114 logs.go:282] 0 containers: []
	W1209 05:53:29.646946 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:53:29.646955 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:53:29.646973 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:53:29.707743 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:53:29.707828 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:53:29.724421 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:53:29.724499 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:53:29.794074 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:53:29.785906    5445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:29.786405    5445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:29.787873    5445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:29.788573    5445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:29.790227    5445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:53:29.794139 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:53:29.794180 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:53:29.820222 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:53:29.820259 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
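
The pgrep probe that opens each pass is what gates this loop: the cycle repeats roughly every three seconds and, judging by the test durations in this report, keeps retrying until minikube's own start deadline expires. A simplified sketch of that wait with a hypothetical 120-second deadline (the real timeout is minikube's, not shown here):

	# Illustrative wait loop; the 120s deadline is an assumption, not minikube's value.
	deadline=$((SECONDS + 120))
	until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	  if [ "$SECONDS" -ge "$deadline" ]; then
	    echo "timed out waiting for kube-apiserver" >&2
	    exit 1
	  fi
	  sleep 3
	done
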
	I1209 05:53:32.350724 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:53:32.361228 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:53:32.361300 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:53:32.389541 1437114 cri.go:89] found id: ""
	I1209 05:53:32.389564 1437114 logs.go:282] 0 containers: []
	W1209 05:53:32.389572 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:53:32.389578 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:53:32.389637 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:53:32.412985 1437114 cri.go:89] found id: ""
	I1209 05:53:32.413008 1437114 logs.go:282] 0 containers: []
	W1209 05:53:32.413017 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:53:32.413023 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:53:32.413100 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:53:32.436603 1437114 cri.go:89] found id: ""
	I1209 05:53:32.436628 1437114 logs.go:282] 0 containers: []
	W1209 05:53:32.436637 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:53:32.436644 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:53:32.436703 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:53:32.461975 1437114 cri.go:89] found id: ""
	I1209 05:53:32.462039 1437114 logs.go:282] 0 containers: []
	W1209 05:53:32.462053 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:53:32.462060 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:53:32.462122 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:53:32.485536 1437114 cri.go:89] found id: ""
	I1209 05:53:32.485560 1437114 logs.go:282] 0 containers: []
	W1209 05:53:32.485568 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:53:32.485574 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:53:32.485633 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:53:32.509130 1437114 cri.go:89] found id: ""
	I1209 05:53:32.509159 1437114 logs.go:282] 0 containers: []
	W1209 05:53:32.509168 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:53:32.509175 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:53:32.509253 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:53:32.532336 1437114 cri.go:89] found id: ""
	I1209 05:53:32.532366 1437114 logs.go:282] 0 containers: []
	W1209 05:53:32.532374 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:53:32.532381 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:53:32.532465 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:53:32.556282 1437114 cri.go:89] found id: ""
	I1209 05:53:32.556319 1437114 logs.go:282] 0 containers: []
	W1209 05:53:32.556329 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:53:32.556338 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:53:32.556352 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:53:32.572109 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:53:32.572183 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:53:32.633108 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:53:32.624780    5553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:32.625448    5553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:32.627074    5553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:32.627615    5553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:32.629220    5553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:53:32.624780    5553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:32.625448    5553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:32.627074    5553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:32.627615    5553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:32.629220    5553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:53:32.633141 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:53:32.633155 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:53:32.662184 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:53:32.662225 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:53:32.702034 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:53:32.702063 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:53:35.266899 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:53:35.277229 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:53:35.277296 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:53:35.300790 1437114 cri.go:89] found id: ""
	I1209 05:53:35.300814 1437114 logs.go:282] 0 containers: []
	W1209 05:53:35.300823 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:53:35.300830 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:53:35.300892 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:53:35.325182 1437114 cri.go:89] found id: ""
	I1209 05:53:35.325204 1437114 logs.go:282] 0 containers: []
	W1209 05:53:35.325212 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:53:35.325218 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:53:35.325280 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:53:35.353701 1437114 cri.go:89] found id: ""
	I1209 05:53:35.353727 1437114 logs.go:282] 0 containers: []
	W1209 05:53:35.353735 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:53:35.353741 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:53:35.353802 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:53:35.377248 1437114 cri.go:89] found id: ""
	I1209 05:53:35.377272 1437114 logs.go:282] 0 containers: []
	W1209 05:53:35.377281 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:53:35.377288 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:53:35.377347 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:53:35.401542 1437114 cri.go:89] found id: ""
	I1209 05:53:35.401568 1437114 logs.go:282] 0 containers: []
	W1209 05:53:35.401577 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:53:35.401584 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:53:35.401663 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:53:35.426460 1437114 cri.go:89] found id: ""
	I1209 05:53:35.426488 1437114 logs.go:282] 0 containers: []
	W1209 05:53:35.426497 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:53:35.426503 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:53:35.426561 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:53:35.454120 1437114 cri.go:89] found id: ""
	I1209 05:53:35.454145 1437114 logs.go:282] 0 containers: []
	W1209 05:53:35.454154 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:53:35.454160 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:53:35.454217 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:53:35.478639 1437114 cri.go:89] found id: ""
	I1209 05:53:35.478664 1437114 logs.go:282] 0 containers: []
	W1209 05:53:35.478673 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:53:35.478681 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:53:35.478692 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:53:35.504448 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:53:35.504487 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:53:35.533724 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:53:35.533751 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:53:35.589526 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:53:35.589560 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:53:35.605319 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:53:35.605345 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:53:35.676318 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:53:35.668651    5679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:35.669162    5679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:35.670613    5679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:35.671063    5679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:35.672483    5679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:53:35.668651    5679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:35.669162    5679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:35.670613    5679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:35.671063    5679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:35.672483    5679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
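[editor's note] Every "describe nodes" attempt above fails the same way: kubectl's discovery request to https://localhost:8443 gets "connect: connection refused" from [::1]:8443, meaning no process is listening on the apiserver port inside the node (consistent with the empty kube-apiserver container listings). The failure mode reduces to a TCP reachability probe; a minimal sketch, assuming it runs where kubectl ran (endpoint and timeout are illustrative):

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    // main performs the check implied by the kubectl error: can a TCP
    // connection be opened to the apiserver endpoint? "connection
    // refused" means nothing is listening on that port.
    func main() {
    	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
    	if err != nil {
    		fmt.Println("apiserver unreachable:", err) // the log's failure mode
    		return
    	}
    	conn.Close()
    	fmt.Println("something is listening on localhost:8443")
    }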
	I1209 05:53:38.177618 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:53:38.191936 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:53:38.192007 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:53:38.225078 1437114 cri.go:89] found id: ""
	I1209 05:53:38.225117 1437114 logs.go:282] 0 containers: []
	W1209 05:53:38.225126 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:53:38.225133 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:53:38.225204 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:53:38.257246 1437114 cri.go:89] found id: ""
	I1209 05:53:38.257272 1437114 logs.go:282] 0 containers: []
	W1209 05:53:38.257281 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:53:38.257286 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:53:38.257350 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:53:38.286060 1437114 cri.go:89] found id: ""
	I1209 05:53:38.286083 1437114 logs.go:282] 0 containers: []
	W1209 05:53:38.286091 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:53:38.286097 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:53:38.286158 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:53:38.315924 1437114 cri.go:89] found id: ""
	I1209 05:53:38.315989 1437114 logs.go:282] 0 containers: []
	W1209 05:53:38.316050 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:53:38.316081 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:53:38.316148 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:53:38.340319 1437114 cri.go:89] found id: ""
	I1209 05:53:38.340348 1437114 logs.go:282] 0 containers: []
	W1209 05:53:38.340357 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:53:38.340363 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:53:38.340424 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:53:38.365184 1437114 cri.go:89] found id: ""
	I1209 05:53:38.365220 1437114 logs.go:282] 0 containers: []
	W1209 05:53:38.365229 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:53:38.365235 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:53:38.365307 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:53:38.389641 1437114 cri.go:89] found id: ""
	I1209 05:53:38.389720 1437114 logs.go:282] 0 containers: []
	W1209 05:53:38.389744 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:53:38.389759 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:53:38.389832 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:53:38.420280 1437114 cri.go:89] found id: ""
	I1209 05:53:38.420306 1437114 logs.go:282] 0 containers: []
	W1209 05:53:38.420315 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:53:38.420324 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:53:38.420353 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:53:38.476252 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:53:38.476288 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:53:38.492393 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:53:38.492472 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:53:38.557826 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:53:38.549594    5780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:38.550283    5780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:38.551905    5780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:38.552451    5780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:38.553989    5780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:53:38.549594    5780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:38.550283    5780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:38.551905    5780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:38.552451    5780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:38.553989    5780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:53:38.557849 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:53:38.557862 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:53:38.583171 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:53:38.583206 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
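[editor's note] Each cycle above probes the control-plane components one by one with `sudo crictl ps -a --quiet --name=<name>` and records `found id: ""` when nothing matches. A compact Go sketch of that per-component query loop, as an illustration of the pattern in the log rather than minikube's actual code (the component list is copied from the names probed above):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // components are the names the log queries in sequence.
    var components = []string{
    	"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    	"kube-proxy", "kube-controller-manager", "kindnet",
    	"kubernetes-dashboard",
    }

    func main() {
    	for _, name := range components {
    		out, err := exec.Command("sudo", "crictl", "ps", "-a",
    			"--quiet", "--name="+name).Output()
    		ids := strings.Fields(string(out))
    		if err != nil || len(ids) == 0 {
    			// Matches the log's `found id: ""` / `0 containers` outcome.
    			fmt.Printf("no container found matching %q\n", name)
    			continue
    		}
    		fmt.Printf("%s: %v\n", name, ids)
    	}
    }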
	I1209 05:53:41.110406 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:53:41.120474 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:53:41.120545 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:53:41.145006 1437114 cri.go:89] found id: ""
	I1209 05:53:41.145030 1437114 logs.go:282] 0 containers: []
	W1209 05:53:41.145038 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:53:41.145044 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:53:41.145100 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:53:41.168892 1437114 cri.go:89] found id: ""
	I1209 05:53:41.168917 1437114 logs.go:282] 0 containers: []
	W1209 05:53:41.168925 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:53:41.168932 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:53:41.168989 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:53:41.206601 1437114 cri.go:89] found id: ""
	I1209 05:53:41.206630 1437114 logs.go:282] 0 containers: []
	W1209 05:53:41.206641 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:53:41.206653 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:53:41.206721 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:53:41.247172 1437114 cri.go:89] found id: ""
	I1209 05:53:41.247204 1437114 logs.go:282] 0 containers: []
	W1209 05:53:41.247212 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:53:41.247219 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:53:41.247276 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:53:41.271589 1437114 cri.go:89] found id: ""
	I1209 05:53:41.271613 1437114 logs.go:282] 0 containers: []
	W1209 05:53:41.271621 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:53:41.271628 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:53:41.271714 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:53:41.298007 1437114 cri.go:89] found id: ""
	I1209 05:53:41.298032 1437114 logs.go:282] 0 containers: []
	W1209 05:53:41.298041 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:53:41.298047 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:53:41.298105 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:53:41.325987 1437114 cri.go:89] found id: ""
	I1209 05:53:41.326010 1437114 logs.go:282] 0 containers: []
	W1209 05:53:41.326025 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:53:41.326050 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:53:41.326131 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:53:41.351424 1437114 cri.go:89] found id: ""
	I1209 05:53:41.351449 1437114 logs.go:282] 0 containers: []
	W1209 05:53:41.351457 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:53:41.351466 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:53:41.351476 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:53:41.376872 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:53:41.376906 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:53:41.405296 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:53:41.405322 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:53:41.461131 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:53:41.461167 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:53:41.477891 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:53:41.477920 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:53:41.546568 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:53:41.537827    5907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:41.538521    5907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:41.540212    5907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:41.540814    5907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:41.542724    5907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:53:41.537827    5907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:41.538521    5907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:41.540212    5907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:41.540814    5907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:41.542724    5907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:53:44.046855 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:53:44.058136 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:53:44.058209 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:53:44.086287 1437114 cri.go:89] found id: ""
	I1209 05:53:44.086311 1437114 logs.go:282] 0 containers: []
	W1209 05:53:44.086320 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:53:44.086326 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:53:44.086390 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:53:44.110388 1437114 cri.go:89] found id: ""
	I1209 05:53:44.110411 1437114 logs.go:282] 0 containers: []
	W1209 05:53:44.110419 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:53:44.110425 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:53:44.110481 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:53:44.134842 1437114 cri.go:89] found id: ""
	I1209 05:53:44.134864 1437114 logs.go:282] 0 containers: []
	W1209 05:53:44.134873 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:53:44.134879 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:53:44.134936 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:53:44.161691 1437114 cri.go:89] found id: ""
	I1209 05:53:44.161716 1437114 logs.go:282] 0 containers: []
	W1209 05:53:44.161725 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:53:44.161732 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:53:44.161789 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:53:44.195302 1437114 cri.go:89] found id: ""
	I1209 05:53:44.195326 1437114 logs.go:282] 0 containers: []
	W1209 05:53:44.195335 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:53:44.195341 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:53:44.195408 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:53:44.225882 1437114 cri.go:89] found id: ""
	I1209 05:53:44.225907 1437114 logs.go:282] 0 containers: []
	W1209 05:53:44.225916 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:53:44.225922 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:53:44.225981 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:53:44.253610 1437114 cri.go:89] found id: ""
	I1209 05:53:44.253636 1437114 logs.go:282] 0 containers: []
	W1209 05:53:44.253645 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:53:44.253655 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:53:44.253734 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:53:44.281815 1437114 cri.go:89] found id: ""
	I1209 05:53:44.281840 1437114 logs.go:282] 0 containers: []
	W1209 05:53:44.281848 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:53:44.281857 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:53:44.281868 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:53:44.339663 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:53:44.339702 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:53:44.355859 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:53:44.355938 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:53:44.429444 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:53:44.421835    6007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:44.422435    6007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:44.423949    6007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:44.424455    6007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:44.425745    6007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:53:44.421835    6007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:44.422435    6007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:44.423949    6007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:44.424455    6007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:44.425745    6007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:53:44.429466 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:53:44.429483 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:53:44.455230 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:53:44.455267 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:53:46.982212 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:53:46.993498 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:53:46.993587 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:53:47.023958 1437114 cri.go:89] found id: ""
	I1209 05:53:47.023982 1437114 logs.go:282] 0 containers: []
	W1209 05:53:47.023991 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:53:47.023997 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:53:47.024069 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:53:47.048879 1437114 cri.go:89] found id: ""
	I1209 05:53:47.048901 1437114 logs.go:282] 0 containers: []
	W1209 05:53:47.048910 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:53:47.048916 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:53:47.048983 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:53:47.073853 1437114 cri.go:89] found id: ""
	I1209 05:53:47.073878 1437114 logs.go:282] 0 containers: []
	W1209 05:53:47.073886 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:53:47.073894 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:53:47.073955 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:53:47.096844 1437114 cri.go:89] found id: ""
	I1209 05:53:47.096869 1437114 logs.go:282] 0 containers: []
	W1209 05:53:47.096877 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:53:47.096884 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:53:47.096945 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:53:47.120160 1437114 cri.go:89] found id: ""
	I1209 05:53:47.120185 1437114 logs.go:282] 0 containers: []
	W1209 05:53:47.120194 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:53:47.120200 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:53:47.120261 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:53:47.145073 1437114 cri.go:89] found id: ""
	I1209 05:53:47.145139 1437114 logs.go:282] 0 containers: []
	W1209 05:53:47.145155 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:53:47.145163 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:53:47.145226 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:53:47.168839 1437114 cri.go:89] found id: ""
	I1209 05:53:47.168862 1437114 logs.go:282] 0 containers: []
	W1209 05:53:47.168870 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:53:47.168878 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:53:47.168956 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:53:47.200241 1437114 cri.go:89] found id: ""
	I1209 05:53:47.200264 1437114 logs.go:282] 0 containers: []
	W1209 05:53:47.200272 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:53:47.200282 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:53:47.200311 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:53:47.261748 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:53:47.261783 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:53:47.277688 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:53:47.277718 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:53:47.342796 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:53:47.334710    6118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:47.335374    6118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:47.336895    6118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:47.337477    6118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:47.338953    6118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:53:47.334710    6118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:47.335374    6118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:47.336895    6118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:47.337477    6118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:47.338953    6118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:53:47.342859 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:53:47.342886 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:53:47.367837 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:53:47.367872 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:53:49.896241 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:53:49.908838 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:53:49.908918 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:53:49.942190 1437114 cri.go:89] found id: ""
	I1209 05:53:49.942212 1437114 logs.go:282] 0 containers: []
	W1209 05:53:49.942221 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:53:49.942226 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:53:49.942387 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:53:49.977371 1437114 cri.go:89] found id: ""
	I1209 05:53:49.977393 1437114 logs.go:282] 0 containers: []
	W1209 05:53:49.977401 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:53:49.977408 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:53:49.977468 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:53:50.002223 1437114 cri.go:89] found id: ""
	I1209 05:53:50.002247 1437114 logs.go:282] 0 containers: []
	W1209 05:53:50.002255 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:53:50.002262 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:53:50.002326 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:53:50.032431 1437114 cri.go:89] found id: ""
	I1209 05:53:50.032458 1437114 logs.go:282] 0 containers: []
	W1209 05:53:50.032467 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:53:50.032474 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:53:50.032535 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:53:50.062289 1437114 cri.go:89] found id: ""
	I1209 05:53:50.062314 1437114 logs.go:282] 0 containers: []
	W1209 05:53:50.062323 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:53:50.062329 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:53:50.062418 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:53:50.088271 1437114 cri.go:89] found id: ""
	I1209 05:53:50.088298 1437114 logs.go:282] 0 containers: []
	W1209 05:53:50.088307 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:53:50.088313 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:53:50.088382 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:53:50.114549 1437114 cri.go:89] found id: ""
	I1209 05:53:50.114629 1437114 logs.go:282] 0 containers: []
	W1209 05:53:50.115120 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:53:50.115137 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:53:50.115209 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:53:50.141196 1437114 cri.go:89] found id: ""
	I1209 05:53:50.141276 1437114 logs.go:282] 0 containers: []
	W1209 05:53:50.141298 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:53:50.141318 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:53:50.141353 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:53:50.198211 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:53:50.198284 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:53:50.215943 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:53:50.216047 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:53:50.281793 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:53:50.272885    6231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:50.273579    6231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:50.275295    6231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:50.275902    6231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:50.277606    6231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:53:50.272885    6231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:50.273579    6231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:50.275295    6231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:50.275902    6231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:50.277606    6231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:53:50.281814 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:53:50.281826 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:53:50.308006 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:53:50.308052 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:53:52.837556 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:53:52.848136 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:53:52.848208 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:53:52.872274 1437114 cri.go:89] found id: ""
	I1209 05:53:52.872302 1437114 logs.go:282] 0 containers: []
	W1209 05:53:52.872310 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:53:52.872317 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:53:52.872375 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:53:52.899101 1437114 cri.go:89] found id: ""
	I1209 05:53:52.899125 1437114 logs.go:282] 0 containers: []
	W1209 05:53:52.899134 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:53:52.899140 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:53:52.899199 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:53:52.926800 1437114 cri.go:89] found id: ""
	I1209 05:53:52.926825 1437114 logs.go:282] 0 containers: []
	W1209 05:53:52.926834 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:53:52.926840 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:53:52.926900 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:53:52.962012 1437114 cri.go:89] found id: ""
	I1209 05:53:52.962037 1437114 logs.go:282] 0 containers: []
	W1209 05:53:52.962055 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:53:52.962063 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:53:52.962140 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:53:52.996310 1437114 cri.go:89] found id: ""
	I1209 05:53:52.996336 1437114 logs.go:282] 0 containers: []
	W1209 05:53:52.996345 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:53:52.996351 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:53:52.996410 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:53:53.031535 1437114 cri.go:89] found id: ""
	I1209 05:53:53.031563 1437114 logs.go:282] 0 containers: []
	W1209 05:53:53.031572 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:53:53.031578 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:53:53.031637 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:53:53.059974 1437114 cri.go:89] found id: ""
	I1209 05:53:53.060004 1437114 logs.go:282] 0 containers: []
	W1209 05:53:53.060030 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:53:53.060038 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:53:53.060096 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:53:53.085290 1437114 cri.go:89] found id: ""
	I1209 05:53:53.085356 1437114 logs.go:282] 0 containers: []
	W1209 05:53:53.085386 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:53:53.085403 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:53:53.085415 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:53:53.142442 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:53:53.142477 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:53:53.159141 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:53:53.159169 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:53:53.237761 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:53:53.229474    6343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:53.230237    6343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:53.231874    6343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:53.232214    6343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:53.233652    6343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:53:53.237779 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:53:53.237791 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:53:53.265602 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:53:53.265679 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
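	The same evidence can be collected by hand. A minimal sketch of the log-gathering pass above, assuming shell access to the node (e.g. `minikube ssh -p <profile>`, profile placeholder hypothetical) and the tools the log itself invokes:

	    # last 400 lines of the kubelet and containerd journals, as gathered above
	    sudo journalctl -u kubelet -n 400
	    sudo journalctl -u containerd -n 400
	    # kernel warnings and above, human-readable, pager and color disabled
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    # node description via the harness kubeconfig (fails here: apiserver is down)
	    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig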
	I1209 05:53:55.800068 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:53:55.810556 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:53:55.810627 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:53:55.836257 1437114 cri.go:89] found id: ""
	I1209 05:53:55.836280 1437114 logs.go:282] 0 containers: []
	W1209 05:53:55.836289 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:53:55.836295 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:53:55.836352 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:53:55.861759 1437114 cri.go:89] found id: ""
	I1209 05:53:55.861783 1437114 logs.go:282] 0 containers: []
	W1209 05:53:55.861792 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:53:55.861798 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:53:55.861865 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:53:55.886950 1437114 cri.go:89] found id: ""
	I1209 05:53:55.886982 1437114 logs.go:282] 0 containers: []
	W1209 05:53:55.886991 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:53:55.886997 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:53:55.887072 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:53:55.912055 1437114 cri.go:89] found id: ""
	I1209 05:53:55.912081 1437114 logs.go:282] 0 containers: []
	W1209 05:53:55.912089 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:53:55.912096 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:53:55.912162 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:53:55.949365 1437114 cri.go:89] found id: ""
	I1209 05:53:55.949431 1437114 logs.go:282] 0 containers: []
	W1209 05:53:55.949455 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:53:55.949471 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:53:55.949545 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:53:55.977916 1437114 cri.go:89] found id: ""
	I1209 05:53:55.977938 1437114 logs.go:282] 0 containers: []
	W1209 05:53:55.977946 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:53:55.977953 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:53:55.978040 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:53:56.013033 1437114 cri.go:89] found id: ""
	I1209 05:53:56.013070 1437114 logs.go:282] 0 containers: []
	W1209 05:53:56.013079 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:53:56.013086 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:53:56.013177 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:53:56.039563 1437114 cri.go:89] found id: ""
	I1209 05:53:56.039610 1437114 logs.go:282] 0 containers: []
	W1209 05:53:56.039620 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:53:56.039629 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:53:56.039641 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:53:56.065976 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:53:56.066014 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:53:56.097703 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:53:56.097732 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:53:56.156555 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:53:56.156594 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:53:56.172549 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:53:56.172576 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:53:56.257220 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:53:56.248866    6469 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:56.249574    6469 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:56.251225    6469 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:56.251719    6469 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:56.253344    6469 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:53:58.758071 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:53:58.768718 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:53:58.768796 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:53:58.793984 1437114 cri.go:89] found id: ""
	I1209 05:53:58.794007 1437114 logs.go:282] 0 containers: []
	W1209 05:53:58.794015 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:53:58.794021 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:53:58.794078 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:53:58.818550 1437114 cri.go:89] found id: ""
	I1209 05:53:58.818574 1437114 logs.go:282] 0 containers: []
	W1209 05:53:58.818582 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:53:58.818589 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:53:58.818648 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:53:58.843617 1437114 cri.go:89] found id: ""
	I1209 05:53:58.843696 1437114 logs.go:282] 0 containers: []
	W1209 05:53:58.843719 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:53:58.843738 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:53:58.843809 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:53:58.868732 1437114 cri.go:89] found id: ""
	I1209 05:53:58.868754 1437114 logs.go:282] 0 containers: []
	W1209 05:53:58.868763 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:53:58.868769 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:53:58.868823 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:53:58.892930 1437114 cri.go:89] found id: ""
	I1209 05:53:58.892953 1437114 logs.go:282] 0 containers: []
	W1209 05:53:58.892961 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:53:58.892968 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:53:58.893027 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:53:58.917833 1437114 cri.go:89] found id: ""
	I1209 05:53:58.917857 1437114 logs.go:282] 0 containers: []
	W1209 05:53:58.917865 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:53:58.917872 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:53:58.917933 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:53:58.965955 1437114 cri.go:89] found id: ""
	I1209 05:53:58.965982 1437114 logs.go:282] 0 containers: []
	W1209 05:53:58.965990 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:53:58.965996 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:53:58.966054 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:53:58.999708 1437114 cri.go:89] found id: ""
	I1209 05:53:58.999736 1437114 logs.go:282] 0 containers: []
	W1209 05:53:58.999744 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:53:58.999754 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:53:58.999764 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:53:59.065757 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:53:59.057189    6561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:59.058037    6561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:59.059660    6561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:59.060056    6561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:59.061679    6561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:53:59.065776 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:53:59.065788 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:53:59.090908 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:53:59.090944 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:53:59.118148 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:53:59.118180 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:53:59.175439 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:53:59.175476 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
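	Between gathering passes the harness probes for a live apiserver with pgrep (-x exact match, -n newest, -f full command line) and re-lists each control-plane container. A hedged sketch of an equivalent wait loop; the 3 s interval is inferred from the timestamp spacing above, not taken from the harness source:

	    # poll until a kube-apiserver process for the minikube profile appears
	    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	        # empty output here mirrors the `found id: ""` lines in the log
	        sudo crictl ps -a --quiet --name=kube-apiserver
	        sleep 3
	    done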
	I1209 05:54:01.697656 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:54:01.712348 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:54:01.712424 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:54:01.743582 1437114 cri.go:89] found id: ""
	I1209 05:54:01.743609 1437114 logs.go:282] 0 containers: []
	W1209 05:54:01.743618 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:54:01.743625 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:54:01.743688 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:54:01.769801 1437114 cri.go:89] found id: ""
	I1209 05:54:01.769825 1437114 logs.go:282] 0 containers: []
	W1209 05:54:01.769834 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:54:01.769840 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:54:01.769896 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:54:01.798274 1437114 cri.go:89] found id: ""
	I1209 05:54:01.798299 1437114 logs.go:282] 0 containers: []
	W1209 05:54:01.798308 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:54:01.798314 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:54:01.798375 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:54:01.827182 1437114 cri.go:89] found id: ""
	I1209 05:54:01.827207 1437114 logs.go:282] 0 containers: []
	W1209 05:54:01.827215 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:54:01.827222 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:54:01.827284 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:54:01.856540 1437114 cri.go:89] found id: ""
	I1209 05:54:01.856564 1437114 logs.go:282] 0 containers: []
	W1209 05:54:01.856573 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:54:01.856579 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:54:01.856659 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:54:01.885694 1437114 cri.go:89] found id: ""
	I1209 05:54:01.885719 1437114 logs.go:282] 0 containers: []
	W1209 05:54:01.885728 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:54:01.885734 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:54:01.885808 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:54:01.915290 1437114 cri.go:89] found id: ""
	I1209 05:54:01.915318 1437114 logs.go:282] 0 containers: []
	W1209 05:54:01.915327 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:54:01.915333 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:54:01.915392 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:54:01.950840 1437114 cri.go:89] found id: ""
	I1209 05:54:01.950869 1437114 logs.go:282] 0 containers: []
	W1209 05:54:01.950878 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:54:01.950888 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:54:01.950899 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:54:02.014414 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:54:02.014453 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:54:02.032051 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:54:02.032135 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:54:02.095629 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:54:02.087393    6683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:02.088084    6683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:02.089580    6683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:02.090087    6683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:02.091647    6683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:54:02.095650 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:54:02.095663 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:54:02.122511 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:54:02.122550 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
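	Every describe-nodes attempt fails the same way: /var/lib/minikube/kubeconfig points kubectl at https://localhost:8443, and with no kube-apiserver container running the TCP connect is refused. A quick manual check, sketched under the assumption that curl is available on the node:

	    # refused while the apiserver is down, matching the errors above
	    curl -sk https://localhost:8443/healthz || echo 'connection refused, as in the log'
	    # root cause: no kube-apiserver container exists at all (empty output)
	    sudo crictl ps -a --quiet --name=kube-apiserver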
	I1209 05:54:04.650297 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:54:04.660872 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:54:04.660943 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:54:04.687789 1437114 cri.go:89] found id: ""
	I1209 05:54:04.687819 1437114 logs.go:282] 0 containers: []
	W1209 05:54:04.687827 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:54:04.687833 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:54:04.687902 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:54:04.711324 1437114 cri.go:89] found id: ""
	I1209 05:54:04.711349 1437114 logs.go:282] 0 containers: []
	W1209 05:54:04.711357 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:54:04.711364 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:54:04.711423 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:54:04.737863 1437114 cri.go:89] found id: ""
	I1209 05:54:04.737888 1437114 logs.go:282] 0 containers: []
	W1209 05:54:04.737896 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:54:04.737902 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:54:04.737978 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:54:04.762117 1437114 cri.go:89] found id: ""
	I1209 05:54:04.762143 1437114 logs.go:282] 0 containers: []
	W1209 05:54:04.762153 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:54:04.762160 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:54:04.762242 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:54:04.786158 1437114 cri.go:89] found id: ""
	I1209 05:54:04.786181 1437114 logs.go:282] 0 containers: []
	W1209 05:54:04.786189 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:54:04.786195 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:54:04.786252 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:54:04.810657 1437114 cri.go:89] found id: ""
	I1209 05:54:04.810727 1437114 logs.go:282] 0 containers: []
	W1209 05:54:04.810758 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:54:04.810777 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:54:04.810865 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:54:04.835039 1437114 cri.go:89] found id: ""
	I1209 05:54:04.835061 1437114 logs.go:282] 0 containers: []
	W1209 05:54:04.835069 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:54:04.835075 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:54:04.835132 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:54:04.863664 1437114 cri.go:89] found id: ""
	I1209 05:54:04.863691 1437114 logs.go:282] 0 containers: []
	W1209 05:54:04.863704 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:54:04.863713 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:54:04.863724 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:54:04.889846 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:54:04.889882 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:54:04.919060 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:54:04.919086 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:54:04.995975 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:54:04.996070 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:54:05.020220 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:54:05.020254 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:54:05.088696 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:54:05.080290    6811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:05.080797    6811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:05.082535    6811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:05.082897    6811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:05.084443    6811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:54:07.590606 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:54:07.601036 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:54:07.601107 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:54:07.626527 1437114 cri.go:89] found id: ""
	I1209 05:54:07.626550 1437114 logs.go:282] 0 containers: []
	W1209 05:54:07.626559 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:54:07.626566 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:54:07.626624 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:54:07.656166 1437114 cri.go:89] found id: ""
	I1209 05:54:07.656193 1437114 logs.go:282] 0 containers: []
	W1209 05:54:07.656201 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:54:07.656207 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:54:07.656272 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:54:07.682014 1437114 cri.go:89] found id: ""
	I1209 05:54:07.682038 1437114 logs.go:282] 0 containers: []
	W1209 05:54:07.682046 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:54:07.682052 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:54:07.682116 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:54:07.707210 1437114 cri.go:89] found id: ""
	I1209 05:54:07.707234 1437114 logs.go:282] 0 containers: []
	W1209 05:54:07.707242 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:54:07.707248 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:54:07.707332 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:54:07.731843 1437114 cri.go:89] found id: ""
	I1209 05:54:07.731868 1437114 logs.go:282] 0 containers: []
	W1209 05:54:07.731877 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:54:07.731892 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:54:07.731958 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:54:07.760321 1437114 cri.go:89] found id: ""
	I1209 05:54:07.760346 1437114 logs.go:282] 0 containers: []
	W1209 05:54:07.760354 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:54:07.760363 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:54:07.760424 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:54:07.786309 1437114 cri.go:89] found id: ""
	I1209 05:54:07.786330 1437114 logs.go:282] 0 containers: []
	W1209 05:54:07.786338 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:54:07.786350 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:54:07.786406 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:54:07.809182 1437114 cri.go:89] found id: ""
	I1209 05:54:07.809216 1437114 logs.go:282] 0 containers: []
	W1209 05:54:07.809225 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:54:07.809233 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:54:07.809244 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:54:07.839994 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:54:07.840050 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:54:07.898120 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:54:07.898152 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:54:07.914130 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:54:07.914234 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:54:08.009314 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:54:07.997479    6916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:07.998081    6916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:07.999634    6916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:08.000228    6916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:08.002087    6916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:54:08.009391 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:54:08.009413 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
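	The container-status pass is runtime-agnostic by construction: the backticks resolve crictl's path if it is installed, otherwise substitute the literal word crictl so the first command fails cleanly and the docker fallback runs. The one-liner from the log, restated on its own:

	    # prefer crictl (path resolved via which), fall back to docker ps
	    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a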
	I1209 05:54:10.536185 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:54:10.547685 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:54:10.547757 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:54:10.571843 1437114 cri.go:89] found id: ""
	I1209 05:54:10.571865 1437114 logs.go:282] 0 containers: []
	W1209 05:54:10.571873 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:54:10.571879 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:54:10.571935 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:54:10.598065 1437114 cri.go:89] found id: ""
	I1209 05:54:10.598092 1437114 logs.go:282] 0 containers: []
	W1209 05:54:10.598101 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:54:10.598107 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:54:10.598165 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:54:10.623072 1437114 cri.go:89] found id: ""
	I1209 05:54:10.623098 1437114 logs.go:282] 0 containers: []
	W1209 05:54:10.623107 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:54:10.623113 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:54:10.623200 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:54:10.649781 1437114 cri.go:89] found id: ""
	I1209 05:54:10.649806 1437114 logs.go:282] 0 containers: []
	W1209 05:54:10.649823 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:54:10.649830 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:54:10.649886 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:54:10.677496 1437114 cri.go:89] found id: ""
	I1209 05:54:10.677529 1437114 logs.go:282] 0 containers: []
	W1209 05:54:10.677538 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:54:10.677544 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:54:10.677603 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:54:10.705951 1437114 cri.go:89] found id: ""
	I1209 05:54:10.705982 1437114 logs.go:282] 0 containers: []
	W1209 05:54:10.705991 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:54:10.705997 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:54:10.706062 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:54:10.730882 1437114 cri.go:89] found id: ""
	I1209 05:54:10.730957 1437114 logs.go:282] 0 containers: []
	W1209 05:54:10.730980 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:54:10.730998 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:54:10.731088 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:54:10.757722 1437114 cri.go:89] found id: ""
	I1209 05:54:10.757753 1437114 logs.go:282] 0 containers: []
	W1209 05:54:10.757761 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:54:10.757771 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:54:10.757784 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:54:10.817777 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:54:10.817812 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:54:10.834055 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:54:10.834083 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:54:10.898677 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:54:10.890728    7018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:10.891591    7018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:10.893093    7018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:10.893520    7018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:10.894977    7018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:54:10.898700 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:54:10.898713 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:54:10.923656 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:54:10.923690 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
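	crictl's --name filter is matched as a regular-expression pattern against container names, and --quiet prints container IDs only, which is why an absent component shows up above as the empty `found id: ""`. A sketch combining the components the harness polls one by one (the combined pattern is illustrative, not from the harness):

	    # one regex covering several control-plane names at once
	    sudo crictl ps -a --quiet --name='kube-(apiserver|scheduler|controller-manager|proxy)'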
	I1209 05:54:13.467228 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:54:13.477812 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:54:13.477886 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:54:13.503323 1437114 cri.go:89] found id: ""
	I1209 05:54:13.503351 1437114 logs.go:282] 0 containers: []
	W1209 05:54:13.503360 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:54:13.503367 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:54:13.503441 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:54:13.538282 1437114 cri.go:89] found id: ""
	I1209 05:54:13.538310 1437114 logs.go:282] 0 containers: []
	W1209 05:54:13.538318 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:54:13.538324 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:54:13.538382 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:54:13.565556 1437114 cri.go:89] found id: ""
	I1209 05:54:13.565584 1437114 logs.go:282] 0 containers: []
	W1209 05:54:13.565594 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:54:13.565600 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:54:13.565659 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:54:13.594477 1437114 cri.go:89] found id: ""
	I1209 05:54:13.594499 1437114 logs.go:282] 0 containers: []
	W1209 05:54:13.594508 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:54:13.594514 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:54:13.594575 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:54:13.618630 1437114 cri.go:89] found id: ""
	I1209 05:54:13.618651 1437114 logs.go:282] 0 containers: []
	W1209 05:54:13.618658 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:54:13.618664 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:54:13.618720 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:54:13.643760 1437114 cri.go:89] found id: ""
	I1209 05:54:13.643786 1437114 logs.go:282] 0 containers: []
	W1209 05:54:13.643795 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:54:13.643801 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:54:13.643858 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:54:13.669716 1437114 cri.go:89] found id: ""
	I1209 05:54:13.669741 1437114 logs.go:282] 0 containers: []
	W1209 05:54:13.669749 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:54:13.669756 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:54:13.669848 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:54:13.693820 1437114 cri.go:89] found id: ""
	I1209 05:54:13.693847 1437114 logs.go:282] 0 containers: []
	W1209 05:54:13.693855 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:54:13.693864 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:54:13.693875 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:54:13.750893 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:54:13.750940 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:54:13.767174 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:54:13.767247 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:54:13.834450 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:54:13.823547    7135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:13.824086    7135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:13.828520    7135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:13.828897    7135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:13.830390    7135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:54:13.834476 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:54:13.834491 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:54:13.860109 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:54:13.860148 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:54:16.386616 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:54:16.396767 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:54:16.396835 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:54:16.421557 1437114 cri.go:89] found id: ""
	I1209 05:54:16.421580 1437114 logs.go:282] 0 containers: []
	W1209 05:54:16.421589 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:54:16.421595 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:54:16.421655 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:54:16.462411 1437114 cri.go:89] found id: ""
	I1209 05:54:16.462432 1437114 logs.go:282] 0 containers: []
	W1209 05:54:16.462441 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:54:16.462447 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:54:16.462505 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:54:16.493789 1437114 cri.go:89] found id: ""
	I1209 05:54:16.493811 1437114 logs.go:282] 0 containers: []
	W1209 05:54:16.493819 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:54:16.493825 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:54:16.493887 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:54:16.523482 1437114 cri.go:89] found id: ""
	I1209 05:54:16.523504 1437114 logs.go:282] 0 containers: []
	W1209 05:54:16.523513 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:54:16.523519 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:54:16.523578 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:54:16.548318 1437114 cri.go:89] found id: ""
	I1209 05:54:16.548354 1437114 logs.go:282] 0 containers: []
	W1209 05:54:16.548363 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:54:16.548386 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:54:16.548471 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:54:16.573131 1437114 cri.go:89] found id: ""
	I1209 05:54:16.573158 1437114 logs.go:282] 0 containers: []
	W1209 05:54:16.573167 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:54:16.573173 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:54:16.573233 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:54:16.596652 1437114 cri.go:89] found id: ""
	I1209 05:54:16.596680 1437114 logs.go:282] 0 containers: []
	W1209 05:54:16.596689 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:54:16.596695 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:54:16.596754 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:54:16.622109 1437114 cri.go:89] found id: ""
	I1209 05:54:16.622131 1437114 logs.go:282] 0 containers: []
	W1209 05:54:16.622139 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:54:16.622148 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:54:16.622160 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:54:16.637977 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:54:16.638014 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:54:16.701887 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:54:16.693598    7245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:16.694125    7245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:16.695778    7245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:16.696319    7245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:16.697759    7245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:54:16.693598    7245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:16.694125    7245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:16.695778    7245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:16.696319    7245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:16.697759    7245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:54:16.701914 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:54:16.701927 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:54:16.728328 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:54:16.728362 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:54:16.756551 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:54:16.756581 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
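
[Editor's note] The lines above are one full iteration of the wait loop that runs when the control plane fails to come up: probe for a live kube-apiserver process with pgrep, ask the CRI runtime for each control-plane container by name, and, finding none, fall back to gathering logs. A minimal standalone Go sketch of that probe sequence follows (a hypothetical program, not minikube's actual logs.go/cri.go; it assumes sudo, pgrep, and crictl are available on the node):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// pgrep exits non-zero when no matching process exists.
	if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
		fmt.Println("kube-apiserver process found")
		return
	}
	// Mirrors the logged command: sudo crictl ps -a --quiet --name=<component>
	for _, name := range []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	} {
		out, _ := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		fmt.Printf("%-24s %d container(s)\n", name, len(strings.Fields(string(out))))
	}
}

In the log every probe prints `found id: ""` and `0 containers`, which is what steers each iteration into the log-gathering branch.
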
	I1209 05:54:19.313862 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:54:19.323798 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:54:19.323881 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:54:19.348899 1437114 cri.go:89] found id: ""
	I1209 05:54:19.348924 1437114 logs.go:282] 0 containers: []
	W1209 05:54:19.348932 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:54:19.348939 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:54:19.348996 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:54:19.373133 1437114 cri.go:89] found id: ""
	I1209 05:54:19.373156 1437114 logs.go:282] 0 containers: []
	W1209 05:54:19.373164 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:54:19.373170 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:54:19.373226 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:54:19.397615 1437114 cri.go:89] found id: ""
	I1209 05:54:19.397642 1437114 logs.go:282] 0 containers: []
	W1209 05:54:19.397651 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:54:19.397657 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:54:19.397716 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:54:19.426484 1437114 cri.go:89] found id: ""
	I1209 05:54:19.426505 1437114 logs.go:282] 0 containers: []
	W1209 05:54:19.426513 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:54:19.426519 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:54:19.426575 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:54:19.454826 1437114 cri.go:89] found id: ""
	I1209 05:54:19.454852 1437114 logs.go:282] 0 containers: []
	W1209 05:54:19.454868 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:54:19.454874 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:54:19.454941 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:54:19.483800 1437114 cri.go:89] found id: ""
	I1209 05:54:19.483821 1437114 logs.go:282] 0 containers: []
	W1209 05:54:19.483829 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:54:19.483835 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:54:19.483890 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:54:19.510301 1437114 cri.go:89] found id: ""
	I1209 05:54:19.510322 1437114 logs.go:282] 0 containers: []
	W1209 05:54:19.510330 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:54:19.510336 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:54:19.510392 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:54:19.533740 1437114 cri.go:89] found id: ""
	I1209 05:54:19.533766 1437114 logs.go:282] 0 containers: []
	W1209 05:54:19.533775 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:54:19.533785 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:54:19.533797 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:54:19.590533 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:54:19.590609 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:54:19.607749 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:54:19.607831 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:54:19.670098 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:54:19.662273    7359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:19.663063    7359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:19.664591    7359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:19.664886    7359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:19.666309    7359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:54:19.662273    7359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:19.663063    7359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:19.664591    7359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:19.664886    7359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:19.666309    7359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:54:19.670121 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:54:19.670135 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:54:19.696365 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:54:19.696401 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
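
[Editor's note] Every "describe nodes" attempt above fails the same way: kubectl never reaches the API because nothing is listening on port 8443, so the TCP connect is refused before any request is sent. A quick reachability check that captures the same precondition (a sketch; port 8443 is taken from the kubectl errors above):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// "connection refused" in the kubectl stderr above is exactly this
	// dial failing: no listener on the apiserver port yet.
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver port not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port open; describe nodes can succeed now")
}
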
	I1209 05:54:22.225234 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:54:22.235522 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:54:22.235590 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:54:22.260044 1437114 cri.go:89] found id: ""
	I1209 05:54:22.260067 1437114 logs.go:282] 0 containers: []
	W1209 05:54:22.260076 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:54:22.260082 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:54:22.260141 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:54:22.283666 1437114 cri.go:89] found id: ""
	I1209 05:54:22.283694 1437114 logs.go:282] 0 containers: []
	W1209 05:54:22.283702 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:54:22.283708 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:54:22.283764 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:54:22.307779 1437114 cri.go:89] found id: ""
	I1209 05:54:22.307812 1437114 logs.go:282] 0 containers: []
	W1209 05:54:22.307821 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:54:22.307827 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:54:22.307884 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:54:22.333595 1437114 cri.go:89] found id: ""
	I1209 05:54:22.333621 1437114 logs.go:282] 0 containers: []
	W1209 05:54:22.333629 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:54:22.333635 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:54:22.333692 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:54:22.357452 1437114 cri.go:89] found id: ""
	I1209 05:54:22.357476 1437114 logs.go:282] 0 containers: []
	W1209 05:54:22.357484 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:54:22.357490 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:54:22.357551 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:54:22.382107 1437114 cri.go:89] found id: ""
	I1209 05:54:22.382170 1437114 logs.go:282] 0 containers: []
	W1209 05:54:22.382184 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:54:22.382192 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:54:22.382251 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:54:22.406738 1437114 cri.go:89] found id: ""
	I1209 05:54:22.406770 1437114 logs.go:282] 0 containers: []
	W1209 05:54:22.406780 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:54:22.406787 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:54:22.406858 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:54:22.432967 1437114 cri.go:89] found id: ""
	I1209 05:54:22.433002 1437114 logs.go:282] 0 containers: []
	W1209 05:54:22.433011 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:54:22.433020 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:54:22.433030 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:54:22.496308 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:54:22.496347 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:54:22.513215 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:54:22.513243 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:54:22.576557 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:54:22.568457    7472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:22.569106    7472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:22.570813    7472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:22.571288    7472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:22.572769    7472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:54:22.568457    7472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:22.569106    7472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:22.570813    7472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:22.571288    7472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:22.572769    7472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:54:22.576620 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:54:22.576641 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:54:22.601775 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:54:22.601808 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:54:25.129209 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:54:25.140801 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:54:25.140875 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:54:25.167673 1437114 cri.go:89] found id: ""
	I1209 05:54:25.167699 1437114 logs.go:282] 0 containers: []
	W1209 05:54:25.167708 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:54:25.167714 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:54:25.167774 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:54:25.213289 1437114 cri.go:89] found id: ""
	I1209 05:54:25.213317 1437114 logs.go:282] 0 containers: []
	W1209 05:54:25.213326 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:54:25.213332 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:54:25.213394 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:54:25.251150 1437114 cri.go:89] found id: ""
	I1209 05:54:25.251173 1437114 logs.go:282] 0 containers: []
	W1209 05:54:25.251181 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:54:25.251187 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:54:25.251251 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:54:25.278324 1437114 cri.go:89] found id: ""
	I1209 05:54:25.278347 1437114 logs.go:282] 0 containers: []
	W1209 05:54:25.278355 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:54:25.278361 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:54:25.278426 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:54:25.305947 1437114 cri.go:89] found id: ""
	I1209 05:54:25.305968 1437114 logs.go:282] 0 containers: []
	W1209 05:54:25.305976 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:54:25.305982 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:54:25.306043 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:54:25.330741 1437114 cri.go:89] found id: ""
	I1209 05:54:25.330766 1437114 logs.go:282] 0 containers: []
	W1209 05:54:25.330774 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:54:25.330780 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:54:25.330842 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:54:25.357251 1437114 cri.go:89] found id: ""
	I1209 05:54:25.357289 1437114 logs.go:282] 0 containers: []
	W1209 05:54:25.357297 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:54:25.357303 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:54:25.357361 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:54:25.381550 1437114 cri.go:89] found id: ""
	I1209 05:54:25.381574 1437114 logs.go:282] 0 containers: []
	W1209 05:54:25.381582 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:54:25.381643 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:54:25.381661 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:54:25.407792 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:54:25.407826 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:54:25.444380 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:54:25.444411 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:54:25.508703 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:54:25.508739 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:54:25.525308 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:54:25.525335 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:54:25.590403 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:54:25.582560    7600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:25.583141    7600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:25.584775    7600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:25.585120    7600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:25.586571    7600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:54:25.582560    7600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:25.583141    7600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:25.584775    7600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:25.585120    7600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:25.586571    7600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
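
[Editor's note] With no containers to inspect and no reachable apiserver, the gatherer falls back to host-level sources: the kubelet and containerd journald units, filtered dmesg output, and a container listing that prefers crictl and falls back to docker. A sketch that drives the same shell one-liners (the command strings are copied from the log; the Go wrapper itself is hypothetical):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmds := []struct{ name, cmd string }{
		{"kubelet", "sudo journalctl -u kubelet -n 400"},
		{"containerd", "sudo journalctl -u containerd -n 400"},
		{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
		{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
	}
	for _, c := range cmds {
		// CombinedOutput keeps stderr too, matching what the report embeds.
		out, err := exec.Command("/bin/bash", "-c", c.cmd).CombinedOutput()
		fmt.Printf("== %s (err=%v) ==\n%s\n", c.name, err, out)
	}
}
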
	I1209 05:54:28.090673 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:54:28.101806 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:54:28.101927 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:54:28.126175 1437114 cri.go:89] found id: ""
	I1209 05:54:28.126210 1437114 logs.go:282] 0 containers: []
	W1209 05:54:28.126219 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:54:28.126225 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:54:28.126302 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:54:28.154842 1437114 cri.go:89] found id: ""
	I1209 05:54:28.154863 1437114 logs.go:282] 0 containers: []
	W1209 05:54:28.154872 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:54:28.154878 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:54:28.154936 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:54:28.181513 1437114 cri.go:89] found id: ""
	I1209 05:54:28.181536 1437114 logs.go:282] 0 containers: []
	W1209 05:54:28.181543 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:54:28.181550 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:54:28.181606 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:54:28.208958 1437114 cri.go:89] found id: ""
	I1209 05:54:28.208979 1437114 logs.go:282] 0 containers: []
	W1209 05:54:28.208987 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:54:28.208993 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:54:28.209051 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:54:28.236261 1437114 cri.go:89] found id: ""
	I1209 05:54:28.236288 1437114 logs.go:282] 0 containers: []
	W1209 05:54:28.236296 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:54:28.236308 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:54:28.236365 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:54:28.264550 1437114 cri.go:89] found id: ""
	I1209 05:54:28.264573 1437114 logs.go:282] 0 containers: []
	W1209 05:54:28.264582 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:54:28.264588 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:54:28.264645 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:54:28.288754 1437114 cri.go:89] found id: ""
	I1209 05:54:28.288779 1437114 logs.go:282] 0 containers: []
	W1209 05:54:28.288787 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:54:28.288805 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:54:28.288865 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:54:28.311894 1437114 cri.go:89] found id: ""
	I1209 05:54:28.311922 1437114 logs.go:282] 0 containers: []
	W1209 05:54:28.311931 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:54:28.311941 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:54:28.311952 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:54:28.368882 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:54:28.368916 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:54:28.385073 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:54:28.385102 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:54:28.453852 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:54:28.445585    7699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:28.446317    7699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:28.447999    7699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:28.448560    7699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:28.449990    7699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:54:28.445585    7699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:28.446317    7699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:28.447999    7699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:28.448560    7699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:28.449990    7699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:54:28.453912 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:54:28.453948 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:54:28.481464 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:54:28.481542 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:54:31.017971 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:54:31.028776 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:54:31.028848 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:54:31.059955 1437114 cri.go:89] found id: ""
	I1209 05:54:31.059979 1437114 logs.go:282] 0 containers: []
	W1209 05:54:31.059988 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:54:31.059995 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:54:31.060087 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:54:31.085360 1437114 cri.go:89] found id: ""
	I1209 05:54:31.085389 1437114 logs.go:282] 0 containers: []
	W1209 05:54:31.085398 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:54:31.085404 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:54:31.085466 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:54:31.112050 1437114 cri.go:89] found id: ""
	I1209 05:54:31.112083 1437114 logs.go:282] 0 containers: []
	W1209 05:54:31.112092 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:54:31.112100 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:54:31.112170 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:54:31.139102 1437114 cri.go:89] found id: ""
	I1209 05:54:31.139138 1437114 logs.go:282] 0 containers: []
	W1209 05:54:31.139147 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:54:31.139153 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:54:31.139223 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:54:31.166677 1437114 cri.go:89] found id: ""
	I1209 05:54:31.166710 1437114 logs.go:282] 0 containers: []
	W1209 05:54:31.166720 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:54:31.166727 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:54:31.166818 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:54:31.204582 1437114 cri.go:89] found id: ""
	I1209 05:54:31.204610 1437114 logs.go:282] 0 containers: []
	W1209 05:54:31.204619 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:54:31.204626 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:54:31.204693 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:54:31.242874 1437114 cri.go:89] found id: ""
	I1209 05:54:31.242900 1437114 logs.go:282] 0 containers: []
	W1209 05:54:31.242909 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:54:31.242916 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:54:31.242991 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:54:31.268196 1437114 cri.go:89] found id: ""
	I1209 05:54:31.268225 1437114 logs.go:282] 0 containers: []
	W1209 05:54:31.268234 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:54:31.268243 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:54:31.268254 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:54:31.293521 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:54:31.293559 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:54:31.321144 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:54:31.321175 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:54:31.378617 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:54:31.378656 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:54:31.394506 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:54:31.394533 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:54:31.467240 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:54:31.458393    7825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:31.459167    7825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:31.460831    7825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:31.461408    7825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:31.463045    7825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:54:31.458393    7825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:31.459167    7825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:31.460831    7825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:31.461408    7825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:31.463045    7825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
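
[Editor's note] The timestamps show the cadence of the loop: a probe roughly every three seconds, each followed by the same diagnostic gather, until the test's overall timeout expires. The shape of that loop, as a sketch (the 8-minute deadline and 3-second sleep here are assumptions for illustration, not minikube's actual constants):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func apiserverUp() bool {
	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
}

func main() {
	deadline := time.Now().Add(8 * time.Minute) // assumed budget, for illustration
	for time.Now().Before(deadline) {
		if apiserverUp() {
			fmt.Println("kube-apiserver is up")
			return
		}
		// each failed probe is followed by the diagnostic gather seen in the log
		time.Sleep(3 * time.Second)
	}
	fmt.Println("timed out waiting for kube-apiserver")
}
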
	I1209 05:54:33.967506 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:54:33.977826 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:54:33.977902 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:54:34.002325 1437114 cri.go:89] found id: ""
	I1209 05:54:34.002351 1437114 logs.go:282] 0 containers: []
	W1209 05:54:34.002360 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:54:34.002367 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:54:34.002443 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:54:34.029888 1437114 cri.go:89] found id: ""
	I1209 05:54:34.029919 1437114 logs.go:282] 0 containers: []
	W1209 05:54:34.029928 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:54:34.029935 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:54:34.029996 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:54:34.058673 1437114 cri.go:89] found id: ""
	I1209 05:54:34.058698 1437114 logs.go:282] 0 containers: []
	W1209 05:54:34.058706 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:54:34.058712 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:54:34.058783 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:54:34.083346 1437114 cri.go:89] found id: ""
	I1209 05:54:34.083370 1437114 logs.go:282] 0 containers: []
	W1209 05:54:34.083379 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:54:34.083385 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:54:34.083453 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:54:34.108098 1437114 cri.go:89] found id: ""
	I1209 05:54:34.108126 1437114 logs.go:282] 0 containers: []
	W1209 05:54:34.108135 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:54:34.108141 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:54:34.108227 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:54:34.133779 1437114 cri.go:89] found id: ""
	I1209 05:54:34.133803 1437114 logs.go:282] 0 containers: []
	W1209 05:54:34.133812 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:54:34.133819 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:54:34.133877 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:54:34.161528 1437114 cri.go:89] found id: ""
	I1209 05:54:34.161607 1437114 logs.go:282] 0 containers: []
	W1209 05:54:34.161639 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:54:34.161662 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:54:34.161779 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:54:34.191325 1437114 cri.go:89] found id: ""
	I1209 05:54:34.191400 1437114 logs.go:282] 0 containers: []
	W1209 05:54:34.191423 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:54:34.191443 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:54:34.191493 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:54:34.258939 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:54:34.258977 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:54:34.275607 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:54:34.275640 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:54:34.346638 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:54:34.338621    7924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:34.339268    7924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:34.340363    7924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:34.340982    7924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:34.342615    7924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:54:34.338621    7924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:34.339268    7924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:34.340363    7924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:34.340982    7924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:34.342615    7924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:54:34.346709 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:54:34.346754 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:54:34.373053 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:54:34.373092 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:54:36.904183 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:54:36.914625 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:54:36.914703 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:54:36.939165 1437114 cri.go:89] found id: ""
	I1209 05:54:36.939204 1437114 logs.go:282] 0 containers: []
	W1209 05:54:36.939213 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:54:36.939220 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:54:36.939280 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:54:36.968277 1437114 cri.go:89] found id: ""
	I1209 05:54:36.968303 1437114 logs.go:282] 0 containers: []
	W1209 05:54:36.968312 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:54:36.968319 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:54:36.968379 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:54:36.993837 1437114 cri.go:89] found id: ""
	I1209 05:54:36.993866 1437114 logs.go:282] 0 containers: []
	W1209 05:54:36.993875 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:54:36.993882 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:54:36.993939 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:54:37.029321 1437114 cri.go:89] found id: ""
	I1209 05:54:37.029358 1437114 logs.go:282] 0 containers: []
	W1209 05:54:37.029370 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:54:37.029381 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:54:37.029479 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:54:37.060208 1437114 cri.go:89] found id: ""
	I1209 05:54:37.060235 1437114 logs.go:282] 0 containers: []
	W1209 05:54:37.060244 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:54:37.060251 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:54:37.060311 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:54:37.085969 1437114 cri.go:89] found id: ""
	I1209 05:54:37.085992 1437114 logs.go:282] 0 containers: []
	W1209 05:54:37.086001 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:54:37.086007 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:54:37.086066 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:54:37.114324 1437114 cri.go:89] found id: ""
	I1209 05:54:37.114357 1437114 logs.go:282] 0 containers: []
	W1209 05:54:37.114367 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:54:37.114373 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:54:37.114478 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:54:37.143312 1437114 cri.go:89] found id: ""
	I1209 05:54:37.143339 1437114 logs.go:282] 0 containers: []
	W1209 05:54:37.143348 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:54:37.143357 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:54:37.143369 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:54:37.234893 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:54:37.226773    8026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:37.227615    8026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:37.228809    8026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:37.229450    8026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:37.231054    8026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:54:37.226773    8026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:37.227615    8026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:37.228809    8026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:37.229450    8026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:37.231054    8026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:54:37.234921 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:54:37.234933 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:54:37.262601 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:54:37.262635 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:54:37.289433 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:54:37.289458 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:54:37.345400 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:54:37.345435 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:54:39.861840 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:54:39.873772 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:54:39.873850 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:54:39.901691 1437114 cri.go:89] found id: ""
	I1209 05:54:39.901714 1437114 logs.go:282] 0 containers: []
	W1209 05:54:39.901725 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:54:39.901731 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:54:39.901793 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:54:39.925900 1437114 cri.go:89] found id: ""
	I1209 05:54:39.925935 1437114 logs.go:282] 0 containers: []
	W1209 05:54:39.925944 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:54:39.925950 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:54:39.926009 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:54:39.951997 1437114 cri.go:89] found id: ""
	I1209 05:54:39.952041 1437114 logs.go:282] 0 containers: []
	W1209 05:54:39.952050 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:54:39.952056 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:54:39.952116 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:54:39.980156 1437114 cri.go:89] found id: ""
	I1209 05:54:39.980182 1437114 logs.go:282] 0 containers: []
	W1209 05:54:39.980190 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:54:39.980196 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:54:39.980255 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:54:40.007109 1437114 cri.go:89] found id: ""
	I1209 05:54:40.007136 1437114 logs.go:282] 0 containers: []
	W1209 05:54:40.007146 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:54:40.007154 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:54:40.007234 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:54:40.056170 1437114 cri.go:89] found id: ""
	I1209 05:54:40.056197 1437114 logs.go:282] 0 containers: []
	W1209 05:54:40.056207 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:54:40.056214 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:54:40.056298 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:54:40.085850 1437114 cri.go:89] found id: ""
	I1209 05:54:40.085879 1437114 logs.go:282] 0 containers: []
	W1209 05:54:40.085888 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:54:40.085894 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:54:40.085960 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:54:40.118208 1437114 cri.go:89] found id: ""
	I1209 05:54:40.118245 1437114 logs.go:282] 0 containers: []
	W1209 05:54:40.118256 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:54:40.118267 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:54:40.118281 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:54:40.195166 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:54:40.184383    8140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:40.185244    8140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:40.187445    8140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:40.188458    8140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:40.189404    8140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:54:40.195189 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:54:40.195203 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:54:40.223567 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:54:40.223651 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:54:40.266759 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:54:40.266786 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:54:40.323783 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:54:40.323818 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:54:42.842021 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:54:42.852681 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:54:42.852755 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:54:42.876598 1437114 cri.go:89] found id: ""
	I1209 05:54:42.876622 1437114 logs.go:282] 0 containers: []
	W1209 05:54:42.876631 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:54:42.876637 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:54:42.876694 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:54:42.901491 1437114 cri.go:89] found id: ""
	I1209 05:54:42.901515 1437114 logs.go:282] 0 containers: []
	W1209 05:54:42.901523 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:54:42.901529 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:54:42.901588 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:54:42.930050 1437114 cri.go:89] found id: ""
	I1209 05:54:42.930077 1437114 logs.go:282] 0 containers: []
	W1209 05:54:42.930086 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:54:42.930093 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:54:42.930151 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:54:42.953794 1437114 cri.go:89] found id: ""
	I1209 05:54:42.953817 1437114 logs.go:282] 0 containers: []
	W1209 05:54:42.953825 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:54:42.953837 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:54:42.953940 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:54:42.977300 1437114 cri.go:89] found id: ""
	I1209 05:54:42.977324 1437114 logs.go:282] 0 containers: []
	W1209 05:54:42.977333 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:54:42.977339 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:54:42.977416 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:54:43.001015 1437114 cri.go:89] found id: ""
	I1209 05:54:43.001080 1437114 logs.go:282] 0 containers: []
	W1209 05:54:43.001095 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:54:43.001103 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:54:43.001169 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:54:43.026886 1437114 cri.go:89] found id: ""
	I1209 05:54:43.026910 1437114 logs.go:282] 0 containers: []
	W1209 05:54:43.026918 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:54:43.026925 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:54:43.026984 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:54:43.057227 1437114 cri.go:89] found id: ""
	I1209 05:54:43.057253 1437114 logs.go:282] 0 containers: []
	W1209 05:54:43.057271 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:54:43.057281 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:54:43.057293 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:54:43.115319 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:54:43.115357 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:54:43.131310 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:54:43.131346 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:54:43.204953 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:54:43.196603    8257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:43.197525    8257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:43.199091    8257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:43.199623    8257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:43.201121    8257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:54:43.204975 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:54:43.204987 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:54:43.231713 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:54:43.231747 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:54:45.766147 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:54:45.776210 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:54:45.776285 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:54:45.804782 1437114 cri.go:89] found id: ""
	I1209 05:54:45.804810 1437114 logs.go:282] 0 containers: []
	W1209 05:54:45.804857 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:54:45.804871 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:54:45.804939 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:54:45.828660 1437114 cri.go:89] found id: ""
	I1209 05:54:45.828684 1437114 logs.go:282] 0 containers: []
	W1209 05:54:45.828692 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:54:45.828698 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:54:45.828758 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:54:45.853575 1437114 cri.go:89] found id: ""
	I1209 05:54:45.853598 1437114 logs.go:282] 0 containers: []
	W1209 05:54:45.853606 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:54:45.853612 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:54:45.853667 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:54:45.877674 1437114 cri.go:89] found id: ""
	I1209 05:54:45.877697 1437114 logs.go:282] 0 containers: []
	W1209 05:54:45.877705 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:54:45.877711 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:54:45.877775 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:54:45.902246 1437114 cri.go:89] found id: ""
	I1209 05:54:45.902270 1437114 logs.go:282] 0 containers: []
	W1209 05:54:45.902284 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:54:45.902291 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:54:45.902347 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:54:45.929443 1437114 cri.go:89] found id: ""
	I1209 05:54:45.929517 1437114 logs.go:282] 0 containers: []
	W1209 05:54:45.929532 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:54:45.929539 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:54:45.929596 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:54:45.955032 1437114 cri.go:89] found id: ""
	I1209 05:54:45.955065 1437114 logs.go:282] 0 containers: []
	W1209 05:54:45.955074 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:54:45.955081 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:54:45.955147 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:54:45.983502 1437114 cri.go:89] found id: ""
	I1209 05:54:45.983527 1437114 logs.go:282] 0 containers: []
	W1209 05:54:45.983535 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:54:45.983544 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:54:45.983555 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:54:46.049253 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:54:46.049292 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:54:46.066199 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:54:46.066229 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:54:46.133498 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:54:46.124747    8372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:46.125334    8372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:46.126986    8372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:46.127505    8372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:46.129096    8372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:54:46.133521 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:54:46.133534 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:54:46.159468 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:54:46.159500 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:54:48.698046 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:54:48.710430 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:54:48.710504 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:54:48.739692 1437114 cri.go:89] found id: ""
	I1209 05:54:48.739718 1437114 logs.go:282] 0 containers: []
	W1209 05:54:48.739726 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:54:48.739733 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:54:48.739790 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:54:48.764166 1437114 cri.go:89] found id: ""
	I1209 05:54:48.764192 1437114 logs.go:282] 0 containers: []
	W1209 05:54:48.764200 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:54:48.764206 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:54:48.764264 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:54:48.788074 1437114 cri.go:89] found id: ""
	I1209 05:54:48.788097 1437114 logs.go:282] 0 containers: []
	W1209 05:54:48.788114 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:54:48.788122 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:54:48.788189 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:54:48.813373 1437114 cri.go:89] found id: ""
	I1209 05:54:48.813398 1437114 logs.go:282] 0 containers: []
	W1209 05:54:48.813407 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:54:48.813414 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:54:48.813472 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:54:48.840222 1437114 cri.go:89] found id: ""
	I1209 05:54:48.840248 1437114 logs.go:282] 0 containers: []
	W1209 05:54:48.840256 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:54:48.840270 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:54:48.840331 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:54:48.869002 1437114 cri.go:89] found id: ""
	I1209 05:54:48.869025 1437114 logs.go:282] 0 containers: []
	W1209 05:54:48.869034 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:54:48.869041 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:54:48.869098 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:54:48.897074 1437114 cri.go:89] found id: ""
	I1209 05:54:48.897100 1437114 logs.go:282] 0 containers: []
	W1209 05:54:48.897108 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:54:48.897115 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:54:48.897193 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:54:48.920534 1437114 cri.go:89] found id: ""
	I1209 05:54:48.920559 1437114 logs.go:282] 0 containers: []
	W1209 05:54:48.920567 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:54:48.920576 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:54:48.920588 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:54:48.976882 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:54:48.976918 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:54:48.992754 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:54:48.992782 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:54:49.058058 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:54:49.049574    8487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:49.050149    8487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:49.051765    8487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:49.052269    8487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:49.053870    8487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:54:49.058079 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:54:49.058092 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:54:49.083543 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:54:49.083578 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:54:51.613470 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:54:51.625228 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:54:51.625329 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:54:51.651832 1437114 cri.go:89] found id: ""
	I1209 05:54:51.651863 1437114 logs.go:282] 0 containers: []
	W1209 05:54:51.651871 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:54:51.651878 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:54:51.651989 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:54:51.689430 1437114 cri.go:89] found id: ""
	I1209 05:54:51.689471 1437114 logs.go:282] 0 containers: []
	W1209 05:54:51.689480 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:54:51.689486 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:54:51.689556 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:54:51.718333 1437114 cri.go:89] found id: ""
	I1209 05:54:51.718377 1437114 logs.go:282] 0 containers: []
	W1209 05:54:51.718387 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:54:51.718394 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:54:51.718468 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:54:51.748566 1437114 cri.go:89] found id: ""
	I1209 05:54:51.748641 1437114 logs.go:282] 0 containers: []
	W1209 05:54:51.748656 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:54:51.748663 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:54:51.748732 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:54:51.773149 1437114 cri.go:89] found id: ""
	I1209 05:54:51.773175 1437114 logs.go:282] 0 containers: []
	W1209 05:54:51.773184 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:54:51.773191 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:54:51.773283 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:54:51.802227 1437114 cri.go:89] found id: ""
	I1209 05:54:51.802253 1437114 logs.go:282] 0 containers: []
	W1209 05:54:51.802262 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:54:51.802272 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:54:51.802351 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:54:51.833926 1437114 cri.go:89] found id: ""
	I1209 05:54:51.833994 1437114 logs.go:282] 0 containers: []
	W1209 05:54:51.834016 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:54:51.834036 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:54:51.834126 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:54:51.859887 1437114 cri.go:89] found id: ""
	I1209 05:54:51.859919 1437114 logs.go:282] 0 containers: []
	W1209 05:54:51.859927 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:54:51.859937 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:54:51.859948 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:54:51.876110 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:54:51.876138 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:54:51.942848 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:54:51.934424    8598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:51.935014    8598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:51.936468    8598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:51.937091    8598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:51.938535    8598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:54:51.942870 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:54:51.942883 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:54:51.968433 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:54:51.968466 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:54:51.996383 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:54:51.996421 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:54:54.554719 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:54:54.565346 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:54:54.565415 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:54:54.593426 1437114 cri.go:89] found id: ""
	I1209 05:54:54.593450 1437114 logs.go:282] 0 containers: []
	W1209 05:54:54.593458 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:54:54.593464 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:54:54.593522 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:54:54.621281 1437114 cri.go:89] found id: ""
	I1209 05:54:54.621304 1437114 logs.go:282] 0 containers: []
	W1209 05:54:54.621312 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:54:54.621318 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:54:54.621376 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:54:54.646126 1437114 cri.go:89] found id: ""
	I1209 05:54:54.646194 1437114 logs.go:282] 0 containers: []
	W1209 05:54:54.646216 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:54:54.646234 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:54:54.646318 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:54:54.674944 1437114 cri.go:89] found id: ""
	I1209 05:54:54.674986 1437114 logs.go:282] 0 containers: []
	W1209 05:54:54.675011 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:54:54.675029 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:54:54.675110 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:54:54.700733 1437114 cri.go:89] found id: ""
	I1209 05:54:54.700766 1437114 logs.go:282] 0 containers: []
	W1209 05:54:54.700775 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:54:54.700781 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:54:54.700860 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:54:54.733555 1437114 cri.go:89] found id: ""
	I1209 05:54:54.733634 1437114 logs.go:282] 0 containers: []
	W1209 05:54:54.733656 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:54:54.733676 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:54:54.733777 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:54:54.759852 1437114 cri.go:89] found id: ""
	I1209 05:54:54.759926 1437114 logs.go:282] 0 containers: []
	W1209 05:54:54.759949 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:54:54.759972 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:54:54.760110 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:54:54.784303 1437114 cri.go:89] found id: ""
	I1209 05:54:54.784377 1437114 logs.go:282] 0 containers: []
	W1209 05:54:54.784392 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:54:54.784402 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:54:54.784413 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:54:54.809753 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:54:54.809790 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:54:54.836589 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:54:54.836617 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:54:54.899737 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:54:54.899784 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:54:54.915785 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:54:54.915814 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:54:54.979896 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:54:54.971488    8724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:54.971906    8724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:54.973479    8724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:54.974140    8724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:54.976063    8724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:54:57.480193 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:54:57.491395 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:54:57.491473 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:54:57.518091 1437114 cri.go:89] found id: ""
	I1209 05:54:57.518114 1437114 logs.go:282] 0 containers: []
	W1209 05:54:57.518123 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:54:57.518130 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:54:57.518191 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:54:57.545921 1437114 cri.go:89] found id: ""
	I1209 05:54:57.545954 1437114 logs.go:282] 0 containers: []
	W1209 05:54:57.545962 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:54:57.545969 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:54:57.546037 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:54:57.570249 1437114 cri.go:89] found id: ""
	I1209 05:54:57.570280 1437114 logs.go:282] 0 containers: []
	W1209 05:54:57.570290 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:54:57.570296 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:54:57.570367 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:54:57.597541 1437114 cri.go:89] found id: ""
	I1209 05:54:57.597565 1437114 logs.go:282] 0 containers: []
	W1209 05:54:57.597576 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:54:57.597583 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:54:57.597639 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:54:57.625351 1437114 cri.go:89] found id: ""
	I1209 05:54:57.625374 1437114 logs.go:282] 0 containers: []
	W1209 05:54:57.625382 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:54:57.625388 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:54:57.625446 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:54:57.653430 1437114 cri.go:89] found id: ""
	I1209 05:54:57.653504 1437114 logs.go:282] 0 containers: []
	W1209 05:54:57.653520 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:54:57.653528 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:54:57.653592 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:54:57.686655 1437114 cri.go:89] found id: ""
	I1209 05:54:57.686681 1437114 logs.go:282] 0 containers: []
	W1209 05:54:57.686704 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:54:57.686711 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:54:57.686783 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:54:57.715897 1437114 cri.go:89] found id: ""
	I1209 05:54:57.715924 1437114 logs.go:282] 0 containers: []
	W1209 05:54:57.715932 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:54:57.715941 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:54:57.715952 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:54:57.781835 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:54:57.781871 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:54:57.798499 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:54:57.798527 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:54:57.870136 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:54:57.856259    8825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:57.861442    8825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:57.864278    8825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:57.864723    8825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:57.866272    8825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:54:57.870169 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:54:57.870182 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:54:57.894760 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:54:57.894794 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:55:00.423491 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:55:00.436333 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:55:00.436416 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:55:00.477329 1437114 cri.go:89] found id: ""
	I1209 05:55:00.477357 1437114 logs.go:282] 0 containers: []
	W1209 05:55:00.477367 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:55:00.477373 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:55:00.477440 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:55:00.510439 1437114 cri.go:89] found id: ""
	I1209 05:55:00.510467 1437114 logs.go:282] 0 containers: []
	W1209 05:55:00.510477 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:55:00.510483 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:55:00.510565 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:55:00.539373 1437114 cri.go:89] found id: ""
	I1209 05:55:00.539404 1437114 logs.go:282] 0 containers: []
	W1209 05:55:00.539413 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:55:00.539420 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:55:00.539484 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:55:00.567440 1437114 cri.go:89] found id: ""
	I1209 05:55:00.567470 1437114 logs.go:282] 0 containers: []
	W1209 05:55:00.567479 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:55:00.567486 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:55:00.567547 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:55:00.603417 1437114 cri.go:89] found id: ""
	I1209 05:55:00.603442 1437114 logs.go:282] 0 containers: []
	W1209 05:55:00.603450 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:55:00.603456 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:55:00.603515 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:55:00.628877 1437114 cri.go:89] found id: ""
	I1209 05:55:00.628900 1437114 logs.go:282] 0 containers: []
	W1209 05:55:00.628909 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:55:00.628915 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:55:00.628972 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:55:00.657533 1437114 cri.go:89] found id: ""
	I1209 05:55:00.657562 1437114 logs.go:282] 0 containers: []
	W1209 05:55:00.657571 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:55:00.657578 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:55:00.657638 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:55:00.686066 1437114 cri.go:89] found id: ""
	I1209 05:55:00.686090 1437114 logs.go:282] 0 containers: []
	W1209 05:55:00.686099 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:55:00.686108 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:55:00.686120 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:55:00.708508 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:55:00.708588 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:55:00.777301 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:55:00.768863    8933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:00.769274    8933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:00.770892    8933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:00.771415    8933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:00.772464    8933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:55:00.768863    8933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:00.769274    8933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:00.770892    8933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:00.771415    8933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:00.772464    8933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:55:00.777372 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:55:00.777394 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:55:00.802304 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:55:00.802337 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:55:00.829410 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:55:00.829436 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
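Each retry cycle above runs the same probe sequence every ~3 seconds while minikube waits for the apiserver. A minimal sketch of those checks, runnable inside the node over SSH; the binary path and kubeconfig location are copied verbatim from the log lines above, not from any documented interface:

    # does an apiserver process exist yet? (the pgrep each cycle starts with)
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    # is any apiserver container known to containerd's CRI?
    sudo crictl ps -a --quiet --name=kube-apiserver
    # ask the apiserver itself; fails with "connection refused" until it is up
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
        --kubeconfig=/var/lib/minikube/kubeconfig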
	I1209 05:55:03.385877 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:55:03.396171 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:55:03.396238 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:55:03.420742 1437114 cri.go:89] found id: ""
	I1209 05:55:03.420767 1437114 logs.go:282] 0 containers: []
	W1209 05:55:03.420775 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:55:03.420781 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:55:03.420837 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:55:03.458835 1437114 cri.go:89] found id: ""
	I1209 05:55:03.458861 1437114 logs.go:282] 0 containers: []
	W1209 05:55:03.458869 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:55:03.458876 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:55:03.458934 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:55:03.488300 1437114 cri.go:89] found id: ""
	I1209 05:55:03.488326 1437114 logs.go:282] 0 containers: []
	W1209 05:55:03.488334 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:55:03.488340 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:55:03.488400 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:55:03.516405 1437114 cri.go:89] found id: ""
	I1209 05:55:03.516432 1437114 logs.go:282] 0 containers: []
	W1209 05:55:03.516440 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:55:03.516446 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:55:03.516506 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:55:03.545401 1437114 cri.go:89] found id: ""
	I1209 05:55:03.545467 1437114 logs.go:282] 0 containers: []
	W1209 05:55:03.545492 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:55:03.545510 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:55:03.545597 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:55:03.570243 1437114 cri.go:89] found id: ""
	I1209 05:55:03.570316 1437114 logs.go:282] 0 containers: []
	W1209 05:55:03.570342 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:55:03.570357 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:55:03.570449 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:55:03.594930 1437114 cri.go:89] found id: ""
	I1209 05:55:03.594955 1437114 logs.go:282] 0 containers: []
	W1209 05:55:03.594965 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:55:03.594971 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:55:03.595030 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:55:03.619052 1437114 cri.go:89] found id: ""
	I1209 05:55:03.619080 1437114 logs.go:282] 0 containers: []
	W1209 05:55:03.619089 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:55:03.619098 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:55:03.619114 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:55:03.676980 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:55:03.677019 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:55:03.697398 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:55:03.697427 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:55:03.769575 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:55:03.761997    9046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:03.762424    9046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:03.763695    9046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:03.764060    9046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:03.765630    9046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:55:03.761997    9046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:03.762424    9046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:03.763695    9046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:03.764060    9046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:03.765630    9046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:55:03.769607 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:55:03.769620 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:55:03.794589 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:55:03.794623 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
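The "container status" gather relies on a shell fallback chain: the command substitution resolves crictl to its full path when it is on PATH, falls back to the bare name otherwise, and if that invocation still fails the whole pipeline falls back to docker. Expanded for readability, this sketch behaves the same as the one-liner in the log (assuming bash on the node):

    # prefer an installed crictl, else try the bare name, else fall back to docker
    CRICTL="$(which crictl || echo crictl)"
    sudo "$CRICTL" ps -a || sudo docker ps -a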
	I1209 05:55:06.321615 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:55:06.331929 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:55:06.331999 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:55:06.358377 1437114 cri.go:89] found id: ""
	I1209 05:55:06.358403 1437114 logs.go:282] 0 containers: []
	W1209 05:55:06.358411 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:55:06.358418 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:55:06.358481 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:55:06.384508 1437114 cri.go:89] found id: ""
	I1209 05:55:06.384533 1437114 logs.go:282] 0 containers: []
	W1209 05:55:06.384542 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:55:06.384548 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:55:06.384607 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:55:06.408779 1437114 cri.go:89] found id: ""
	I1209 05:55:06.408801 1437114 logs.go:282] 0 containers: []
	W1209 05:55:06.408810 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:55:06.408816 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:55:06.408874 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:55:06.441591 1437114 cri.go:89] found id: ""
	I1209 05:55:06.441613 1437114 logs.go:282] 0 containers: []
	W1209 05:55:06.441622 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:55:06.441628 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:55:06.441689 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:55:06.474533 1437114 cri.go:89] found id: ""
	I1209 05:55:06.474555 1437114 logs.go:282] 0 containers: []
	W1209 05:55:06.474567 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:55:06.474574 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:55:06.474706 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:55:06.503583 1437114 cri.go:89] found id: ""
	I1209 05:55:06.503655 1437114 logs.go:282] 0 containers: []
	W1209 05:55:06.503677 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:55:06.503697 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:55:06.503785 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:55:06.529409 1437114 cri.go:89] found id: ""
	I1209 05:55:06.529434 1437114 logs.go:282] 0 containers: []
	W1209 05:55:06.529443 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:55:06.529449 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:55:06.529508 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:55:06.559184 1437114 cri.go:89] found id: ""
	I1209 05:55:06.559254 1437114 logs.go:282] 0 containers: []
	W1209 05:55:06.559289 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:55:06.559317 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:55:06.559341 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:55:06.616116 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:55:06.616152 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:55:06.632189 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:55:06.632218 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:55:06.703879 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:55:06.694883    9157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:06.695859    9157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:06.697486    9157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:06.698063    9157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:06.699592    9157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:55:06.694883    9157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:06.695859    9157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:06.697486    9157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:06.698063    9157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:06.699592    9157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:55:06.703908 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:55:06.703924 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:55:06.733107 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:55:06.733166 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
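Alongside the CRI listings, each cycle collects the same three host-side log bundles. The exact commands, copied from the Run: lines above, can be replayed by hand inside the node when investigating why the control plane never came up:

    # unit logs for the kubelet and the container runtime, last 400 lines each
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u containerd -n 400
    # kernel warnings and errors that could explain failed container starts
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400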
	I1209 05:55:09.268085 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:55:09.278413 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:55:09.278488 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:55:09.301738 1437114 cri.go:89] found id: ""
	I1209 05:55:09.301764 1437114 logs.go:282] 0 containers: []
	W1209 05:55:09.301773 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:55:09.301779 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:55:09.301836 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:55:09.329939 1437114 cri.go:89] found id: ""
	I1209 05:55:09.329962 1437114 logs.go:282] 0 containers: []
	W1209 05:55:09.329970 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:55:09.329976 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:55:09.330032 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:55:09.358792 1437114 cri.go:89] found id: ""
	I1209 05:55:09.358825 1437114 logs.go:282] 0 containers: []
	W1209 05:55:09.358834 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:55:09.358840 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:55:09.358934 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:55:09.383783 1437114 cri.go:89] found id: ""
	I1209 05:55:09.383806 1437114 logs.go:282] 0 containers: []
	W1209 05:55:09.383814 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:55:09.383820 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:55:09.383881 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:55:09.409956 1437114 cri.go:89] found id: ""
	I1209 05:55:09.409982 1437114 logs.go:282] 0 containers: []
	W1209 05:55:09.409990 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:55:09.409997 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:55:09.410054 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:55:09.442388 1437114 cri.go:89] found id: ""
	I1209 05:55:09.442471 1437114 logs.go:282] 0 containers: []
	W1209 05:55:09.442502 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:55:09.442524 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:55:09.442611 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:55:09.472213 1437114 cri.go:89] found id: ""
	I1209 05:55:09.472234 1437114 logs.go:282] 0 containers: []
	W1209 05:55:09.472243 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:55:09.472249 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:55:09.472306 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:55:09.500348 1437114 cri.go:89] found id: ""
	I1209 05:55:09.500372 1437114 logs.go:282] 0 containers: []
	W1209 05:55:09.500381 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:55:09.500390 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:55:09.500401 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:55:09.556960 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:55:09.556998 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:55:09.573143 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:55:09.573173 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:55:09.641645 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:55:09.634078    9266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:09.634591    9266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:09.636259    9266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:09.636782    9266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:09.637775    9266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:55:09.634078    9266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:09.634591    9266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:09.636259    9266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:09.636782    9266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:09.637775    9266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:55:09.641669 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:55:09.641682 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:55:09.667979 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:55:09.668100 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:55:12.205096 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:55:12.215660 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:55:12.215729 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:55:12.239566 1437114 cri.go:89] found id: ""
	I1209 05:55:12.239594 1437114 logs.go:282] 0 containers: []
	W1209 05:55:12.239603 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:55:12.239609 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:55:12.239668 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:55:12.267891 1437114 cri.go:89] found id: ""
	I1209 05:55:12.267914 1437114 logs.go:282] 0 containers: []
	W1209 05:55:12.267924 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:55:12.267930 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:55:12.267992 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:55:12.296494 1437114 cri.go:89] found id: ""
	I1209 05:55:12.296523 1437114 logs.go:282] 0 containers: []
	W1209 05:55:12.296532 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:55:12.296539 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:55:12.296602 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:55:12.322105 1437114 cri.go:89] found id: ""
	I1209 05:55:12.322135 1437114 logs.go:282] 0 containers: []
	W1209 05:55:12.322144 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:55:12.322151 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:55:12.322208 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:55:12.347978 1437114 cri.go:89] found id: ""
	I1209 05:55:12.348001 1437114 logs.go:282] 0 containers: []
	W1209 05:55:12.348010 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:55:12.348038 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:55:12.348096 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:55:12.372241 1437114 cri.go:89] found id: ""
	I1209 05:55:12.372275 1437114 logs.go:282] 0 containers: []
	W1209 05:55:12.372311 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:55:12.372318 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:55:12.372384 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:55:12.397758 1437114 cri.go:89] found id: ""
	I1209 05:55:12.397784 1437114 logs.go:282] 0 containers: []
	W1209 05:55:12.397792 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:55:12.397799 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:55:12.397860 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:55:12.422922 1437114 cri.go:89] found id: ""
	I1209 05:55:12.422948 1437114 logs.go:282] 0 containers: []
	W1209 05:55:12.422958 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:55:12.422968 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:55:12.422981 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:55:12.480231 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:55:12.480268 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:55:12.497991 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:55:12.498029 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:55:12.565247 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:55:12.557686    9379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:12.558053    9379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:12.559575    9379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:12.559888    9379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:12.561291    9379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:55:12.557686    9379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:12.558053    9379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:12.559575    9379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:12.559888    9379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:12.561291    9379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:55:12.565279 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:55:12.565293 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:55:12.590420 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:55:12.590459 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:55:15.122535 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:55:15.133065 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:55:15.133140 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:55:15.158369 1437114 cri.go:89] found id: ""
	I1209 05:55:15.158393 1437114 logs.go:282] 0 containers: []
	W1209 05:55:15.158401 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:55:15.158407 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:55:15.158492 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:55:15.184526 1437114 cri.go:89] found id: ""
	I1209 05:55:15.184550 1437114 logs.go:282] 0 containers: []
	W1209 05:55:15.184558 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:55:15.184564 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:55:15.184627 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:55:15.210248 1437114 cri.go:89] found id: ""
	I1209 05:55:15.210288 1437114 logs.go:282] 0 containers: []
	W1209 05:55:15.210300 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:55:15.210312 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:55:15.210376 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:55:15.239458 1437114 cri.go:89] found id: ""
	I1209 05:55:15.239486 1437114 logs.go:282] 0 containers: []
	W1209 05:55:15.239495 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:55:15.239501 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:55:15.239560 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:55:15.265625 1437114 cri.go:89] found id: ""
	I1209 05:55:15.265649 1437114 logs.go:282] 0 containers: []
	W1209 05:55:15.265658 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:55:15.265664 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:55:15.265729 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:55:15.289543 1437114 cri.go:89] found id: ""
	I1209 05:55:15.289577 1437114 logs.go:282] 0 containers: []
	W1209 05:55:15.289587 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:55:15.289593 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:55:15.289663 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:55:15.314575 1437114 cri.go:89] found id: ""
	I1209 05:55:15.314610 1437114 logs.go:282] 0 containers: []
	W1209 05:55:15.314618 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:55:15.314625 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:55:15.314704 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:55:15.339832 1437114 cri.go:89] found id: ""
	I1209 05:55:15.339858 1437114 logs.go:282] 0 containers: []
	W1209 05:55:15.339865 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:55:15.339875 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:55:15.339890 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:55:15.356748 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:55:15.356774 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:55:15.418122 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:55:15.410189    9483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:15.410797    9483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:15.412374    9483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:15.412679    9483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:15.414149    9483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:55:15.410189    9483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:15.410797    9483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:15.412374    9483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:15.412679    9483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:15.414149    9483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:55:15.418145 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:55:15.418157 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:55:15.446826 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:55:15.446866 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:55:15.483531 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:55:15.483560 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:55:18.042444 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:55:18.053775 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:55:18.053853 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:55:18.090768 1437114 cri.go:89] found id: ""
	I1209 05:55:18.090790 1437114 logs.go:282] 0 containers: []
	W1209 05:55:18.090800 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:55:18.090806 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:55:18.090869 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:55:18.117411 1437114 cri.go:89] found id: ""
	I1209 05:55:18.117438 1437114 logs.go:282] 0 containers: []
	W1209 05:55:18.117448 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:55:18.117458 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:55:18.117516 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:55:18.143495 1437114 cri.go:89] found id: ""
	I1209 05:55:18.143523 1437114 logs.go:282] 0 containers: []
	W1209 05:55:18.143531 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:55:18.143538 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:55:18.143601 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:55:18.169282 1437114 cri.go:89] found id: ""
	I1209 05:55:18.169310 1437114 logs.go:282] 0 containers: []
	W1209 05:55:18.169319 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:55:18.169325 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:55:18.169387 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:55:18.194143 1437114 cri.go:89] found id: ""
	I1209 05:55:18.194210 1437114 logs.go:282] 0 containers: []
	W1209 05:55:18.194234 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:55:18.194248 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:55:18.194319 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:55:18.218540 1437114 cri.go:89] found id: ""
	I1209 05:55:18.218564 1437114 logs.go:282] 0 containers: []
	W1209 05:55:18.218573 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:55:18.218579 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:55:18.218635 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:55:18.242500 1437114 cri.go:89] found id: ""
	I1209 05:55:18.242533 1437114 logs.go:282] 0 containers: []
	W1209 05:55:18.242541 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:55:18.242554 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:55:18.242625 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:55:18.268163 1437114 cri.go:89] found id: ""
	I1209 05:55:18.268189 1437114 logs.go:282] 0 containers: []
	W1209 05:55:18.268198 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:55:18.268207 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:55:18.268219 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:55:18.325316 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:55:18.325352 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:55:18.341326 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:55:18.341355 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:55:18.406565 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:55:18.398134    9600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:18.398838    9600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:18.400544    9600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:18.401064    9600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:18.402624    9600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:55:18.398134    9600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:18.398838    9600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:18.400544    9600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:18.401064    9600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:18.402624    9600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:55:18.406588 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:55:18.406601 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:55:18.432715 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:55:18.433008 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:55:20.971861 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:55:20.983326 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:55:20.983402 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:55:21.009562 1437114 cri.go:89] found id: ""
	I1209 05:55:21.009588 1437114 logs.go:282] 0 containers: []
	W1209 05:55:21.009598 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:55:21.009606 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:55:21.009671 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:55:21.034329 1437114 cri.go:89] found id: ""
	I1209 05:55:21.034355 1437114 logs.go:282] 0 containers: []
	W1209 05:55:21.034364 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:55:21.034370 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:55:21.034444 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:55:21.058554 1437114 cri.go:89] found id: ""
	I1209 05:55:21.058575 1437114 logs.go:282] 0 containers: []
	W1209 05:55:21.058584 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:55:21.058592 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:55:21.058648 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:55:21.086391 1437114 cri.go:89] found id: ""
	I1209 05:55:21.086416 1437114 logs.go:282] 0 containers: []
	W1209 05:55:21.086425 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:55:21.086432 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:55:21.086495 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:55:21.113734 1437114 cri.go:89] found id: ""
	I1209 05:55:21.113757 1437114 logs.go:282] 0 containers: []
	W1209 05:55:21.113771 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:55:21.113777 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:55:21.113836 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:55:21.138081 1437114 cri.go:89] found id: ""
	I1209 05:55:21.138106 1437114 logs.go:282] 0 containers: []
	W1209 05:55:21.138115 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:55:21.138122 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:55:21.138188 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:55:21.162430 1437114 cri.go:89] found id: ""
	I1209 05:55:21.162454 1437114 logs.go:282] 0 containers: []
	W1209 05:55:21.162462 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:55:21.162468 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:55:21.162527 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:55:21.187241 1437114 cri.go:89] found id: ""
	I1209 05:55:21.187269 1437114 logs.go:282] 0 containers: []
	W1209 05:55:21.187277 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:55:21.187286 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:55:21.187298 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:55:21.243731 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:55:21.243768 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:55:21.259723 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:55:21.259752 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:55:21.331265 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:55:21.322926    9714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:21.323669    9714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:21.325163    9714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:21.325582    9714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:21.327036    9714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:55:21.322926    9714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:21.323669    9714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:21.325163    9714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:21.325582    9714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:21.327036    9714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:55:21.331287 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:55:21.331300 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:55:21.357424 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:55:21.357460 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
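Every kubectl attempt in these cycles dies the same way: nothing is listening on localhost:8443, which is consistent with the empty crictl listings (the apiserver container was never created, so there is no process to accept the connection). One way to confirm the missing listener directly, assuming curl is available inside the node; this probe is an illustration, not part of the test's own tooling:

    # "connection refused" while the apiserver is down; a healthy apiserver
    # answers "ok" on /healthz (or /livez on recent releases)
    curl -ksS https://localhost:8443/healthz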
	I1209 05:55:23.888418 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:55:23.899458 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:55:23.899526 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:55:23.923896 1437114 cri.go:89] found id: ""
	I1209 05:55:23.923962 1437114 logs.go:282] 0 containers: []
	W1209 05:55:23.923986 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:55:23.924004 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:55:23.924112 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:55:23.951339 1437114 cri.go:89] found id: ""
	I1209 05:55:23.951409 1437114 logs.go:282] 0 containers: []
	W1209 05:55:23.951432 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:55:23.951450 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:55:23.951535 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:55:23.980727 1437114 cri.go:89] found id: ""
	I1209 05:55:23.980797 1437114 logs.go:282] 0 containers: []
	W1209 05:55:23.980821 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:55:23.980838 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:55:23.980927 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:55:24.018661 1437114 cri.go:89] found id: ""
	I1209 05:55:24.018691 1437114 logs.go:282] 0 containers: []
	W1209 05:55:24.018702 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:55:24.018709 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:55:24.018778 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:55:24.049508 1437114 cri.go:89] found id: ""
	I1209 05:55:24.049536 1437114 logs.go:282] 0 containers: []
	W1209 05:55:24.049545 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:55:24.049551 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:55:24.049610 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:55:24.074712 1437114 cri.go:89] found id: ""
	I1209 05:55:24.074741 1437114 logs.go:282] 0 containers: []
	W1209 05:55:24.074751 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:55:24.074757 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:55:24.074825 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:55:24.100769 1437114 cri.go:89] found id: ""
	I1209 05:55:24.100795 1437114 logs.go:282] 0 containers: []
	W1209 05:55:24.100804 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:55:24.100810 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:55:24.100871 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:55:24.125003 1437114 cri.go:89] found id: ""
	I1209 05:55:24.125031 1437114 logs.go:282] 0 containers: []
	W1209 05:55:24.125039 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:55:24.125049 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:55:24.125061 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:55:24.194763 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:55:24.186517    9818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:24.187020    9818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:24.188525    9818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:24.188998    9818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:24.190667    9818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:55:24.194832 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:55:24.194870 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:55:24.220205 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:55:24.220239 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:55:24.246742 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:55:24.246769 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:55:24.303551 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:55:24.303584 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:55:26.819975 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:55:26.830655 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:55:26.830725 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:55:26.858629 1437114 cri.go:89] found id: ""
	I1209 05:55:26.858653 1437114 logs.go:282] 0 containers: []
	W1209 05:55:26.858661 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:55:26.858667 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:55:26.858733 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:55:26.883327 1437114 cri.go:89] found id: ""
	I1209 05:55:26.883354 1437114 logs.go:282] 0 containers: []
	W1209 05:55:26.883363 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:55:26.883369 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:55:26.883431 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:55:26.909455 1437114 cri.go:89] found id: ""
	I1209 05:55:26.909475 1437114 logs.go:282] 0 containers: []
	W1209 05:55:26.909484 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:55:26.909490 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:55:26.909551 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:55:26.940313 1437114 cri.go:89] found id: ""
	I1209 05:55:26.940345 1437114 logs.go:282] 0 containers: []
	W1209 05:55:26.940358 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:55:26.940365 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:55:26.940432 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:55:26.974610 1437114 cri.go:89] found id: ""
	I1209 05:55:26.974686 1437114 logs.go:282] 0 containers: []
	W1209 05:55:26.974708 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:55:26.974725 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:55:26.974817 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:55:27.007512 1437114 cri.go:89] found id: ""
	I1209 05:55:27.007592 1437114 logs.go:282] 0 containers: []
	W1209 05:55:27.007616 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:55:27.007637 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:55:27.007748 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:55:27.032955 1437114 cri.go:89] found id: ""
	I1209 05:55:27.033029 1437114 logs.go:282] 0 containers: []
	W1209 05:55:27.033053 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:55:27.033071 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:55:27.033155 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:55:27.057112 1437114 cri.go:89] found id: ""
	I1209 05:55:27.057177 1437114 logs.go:282] 0 containers: []
	W1209 05:55:27.057191 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:55:27.057202 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:55:27.057219 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:55:27.118936 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:55:27.110736    9930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:27.111264    9930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:27.112691    9930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:27.112981    9930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:27.114451    9930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:55:27.118961 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:55:27.118974 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:55:27.144106 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:55:27.144179 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:55:27.174234 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:55:27.174260 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:55:27.230096 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:55:27.230129 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:55:29.746369 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:55:29.756575 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:55:29.756649 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:55:29.784727 1437114 cri.go:89] found id: ""
	I1209 05:55:29.784750 1437114 logs.go:282] 0 containers: []
	W1209 05:55:29.784758 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:55:29.784764 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:55:29.784824 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:55:29.808671 1437114 cri.go:89] found id: ""
	I1209 05:55:29.808696 1437114 logs.go:282] 0 containers: []
	W1209 05:55:29.808705 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:55:29.808711 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:55:29.808793 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:55:29.832440 1437114 cri.go:89] found id: ""
	I1209 05:55:29.832470 1437114 logs.go:282] 0 containers: []
	W1209 05:55:29.832479 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:55:29.832485 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:55:29.832549 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:55:29.857781 1437114 cri.go:89] found id: ""
	I1209 05:55:29.857807 1437114 logs.go:282] 0 containers: []
	W1209 05:55:29.857815 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:55:29.857821 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:55:29.857901 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:55:29.882048 1437114 cri.go:89] found id: ""
	I1209 05:55:29.882073 1437114 logs.go:282] 0 containers: []
	W1209 05:55:29.882081 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:55:29.882087 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:55:29.882176 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:55:29.905398 1437114 cri.go:89] found id: ""
	I1209 05:55:29.905422 1437114 logs.go:282] 0 containers: []
	W1209 05:55:29.905431 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:55:29.905438 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:55:29.905526 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:55:29.931783 1437114 cri.go:89] found id: ""
	I1209 05:55:29.931816 1437114 logs.go:282] 0 containers: []
	W1209 05:55:29.931824 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:55:29.931831 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:55:29.931903 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:55:29.961633 1437114 cri.go:89] found id: ""
	I1209 05:55:29.961665 1437114 logs.go:282] 0 containers: []
	W1209 05:55:29.961673 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:55:29.961683 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:55:29.961695 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:55:30.041769 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:55:30.025451   10042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:30.026529   10042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:30.027374   10042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:30.029780   10042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:30.030693   10042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:55:30.041793 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:55:30.041807 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:55:30.069912 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:55:30.069946 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:55:30.104202 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:55:30.104232 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:55:30.162750 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:55:30.162784 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:55:32.680152 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:55:32.694260 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:55:32.694425 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:55:32.728965 1437114 cri.go:89] found id: ""
	I1209 05:55:32.729064 1437114 logs.go:282] 0 containers: []
	W1209 05:55:32.729088 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:55:32.729108 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:55:32.729212 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:55:32.760196 1437114 cri.go:89] found id: ""
	I1209 05:55:32.760220 1437114 logs.go:282] 0 containers: []
	W1209 05:55:32.760228 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:55:32.760235 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:55:32.760303 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:55:32.785415 1437114 cri.go:89] found id: ""
	I1209 05:55:32.785448 1437114 logs.go:282] 0 containers: []
	W1209 05:55:32.785457 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:55:32.785463 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:55:32.785528 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:55:32.809252 1437114 cri.go:89] found id: ""
	I1209 05:55:32.809327 1437114 logs.go:282] 0 containers: []
	W1209 05:55:32.809343 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:55:32.809357 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:55:32.809417 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:55:32.834255 1437114 cri.go:89] found id: ""
	I1209 05:55:32.834281 1437114 logs.go:282] 0 containers: []
	W1209 05:55:32.834295 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:55:32.834302 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:55:32.834362 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:55:32.859400 1437114 cri.go:89] found id: ""
	I1209 05:55:32.859426 1437114 logs.go:282] 0 containers: []
	W1209 05:55:32.859443 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:55:32.859450 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:55:32.859519 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:55:32.897012 1437114 cri.go:89] found id: ""
	I1209 05:55:32.897037 1437114 logs.go:282] 0 containers: []
	W1209 05:55:32.897046 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:55:32.897053 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:55:32.897167 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:55:32.921653 1437114 cri.go:89] found id: ""
	I1209 05:55:32.921685 1437114 logs.go:282] 0 containers: []
	W1209 05:55:32.921693 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:55:32.921703 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:55:32.921713 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:55:32.948373 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:55:32.948454 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:55:32.981605 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:55:32.981678 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:55:33.043445 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:55:33.043481 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:55:33.059128 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:55:33.059160 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:55:33.122257 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:55:33.113462   10172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:33.113864   10172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:33.115638   10172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:33.116342   10172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:33.117919   10172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:55:35.623296 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:55:35.635539 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:55:35.635647 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:55:35.663702 1437114 cri.go:89] found id: ""
	I1209 05:55:35.663741 1437114 logs.go:282] 0 containers: []
	W1209 05:55:35.663753 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:55:35.663760 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:55:35.663865 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:55:35.707406 1437114 cri.go:89] found id: ""
	I1209 05:55:35.707485 1437114 logs.go:282] 0 containers: []
	W1209 05:55:35.707508 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:55:35.707544 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:55:35.707629 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:55:35.734669 1437114 cri.go:89] found id: ""
	I1209 05:55:35.734749 1437114 logs.go:282] 0 containers: []
	W1209 05:55:35.734771 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:55:35.734811 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:55:35.734897 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:55:35.764935 1437114 cri.go:89] found id: ""
	I1209 05:55:35.765012 1437114 logs.go:282] 0 containers: []
	W1209 05:55:35.765036 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:55:35.765054 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:55:35.765127 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:55:35.788829 1437114 cri.go:89] found id: ""
	I1209 05:55:35.788853 1437114 logs.go:282] 0 containers: []
	W1209 05:55:35.788869 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:55:35.788876 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:55:35.788978 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:55:35.813639 1437114 cri.go:89] found id: ""
	I1209 05:55:35.813666 1437114 logs.go:282] 0 containers: []
	W1209 05:55:35.813674 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:55:35.813681 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:55:35.813787 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:55:35.843416 1437114 cri.go:89] found id: ""
	I1209 05:55:35.843460 1437114 logs.go:282] 0 containers: []
	W1209 05:55:35.843469 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:55:35.843481 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:55:35.843555 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:55:35.868194 1437114 cri.go:89] found id: ""
	I1209 05:55:35.868221 1437114 logs.go:282] 0 containers: []
	W1209 05:55:35.868231 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:55:35.868239 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:55:35.868251 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:55:35.925041 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:55:35.925080 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:55:35.951129 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:55:35.951341 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:55:36.030987 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:55:36.022457   10271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:36.023229   10271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:36.023993   10271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:36.025131   10271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:36.025699   10271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:55:36.031012 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:55:36.031026 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:55:36.058849 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:55:36.058884 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:55:38.588358 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:55:38.598423 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:55:38.598488 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:55:38.622572 1437114 cri.go:89] found id: ""
	I1209 05:55:38.622596 1437114 logs.go:282] 0 containers: []
	W1209 05:55:38.622605 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:55:38.622612 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:55:38.622669 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:55:38.650917 1437114 cri.go:89] found id: ""
	I1209 05:55:38.650942 1437114 logs.go:282] 0 containers: []
	W1209 05:55:38.650950 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:55:38.650956 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:55:38.651013 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:55:38.677402 1437114 cri.go:89] found id: ""
	I1209 05:55:38.677435 1437114 logs.go:282] 0 containers: []
	W1209 05:55:38.677444 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:55:38.677451 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:55:38.677558 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:55:38.707295 1437114 cri.go:89] found id: ""
	I1209 05:55:38.707328 1437114 logs.go:282] 0 containers: []
	W1209 05:55:38.707337 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:55:38.707344 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:55:38.707453 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:55:38.740627 1437114 cri.go:89] found id: ""
	I1209 05:55:38.740652 1437114 logs.go:282] 0 containers: []
	W1209 05:55:38.740660 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:55:38.740667 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:55:38.740727 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:55:38.764991 1437114 cri.go:89] found id: ""
	I1209 05:55:38.765017 1437114 logs.go:282] 0 containers: []
	W1209 05:55:38.765027 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:55:38.765033 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:55:38.765095 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:55:38.789303 1437114 cri.go:89] found id: ""
	I1209 05:55:38.789328 1437114 logs.go:282] 0 containers: []
	W1209 05:55:38.789336 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:55:38.789343 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:55:38.789401 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:55:38.812509 1437114 cri.go:89] found id: ""
	I1209 05:55:38.812533 1437114 logs.go:282] 0 containers: []
	W1209 05:55:38.812541 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:55:38.812551 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:55:38.812562 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:55:38.869277 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:55:38.869309 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:55:38.885634 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:55:38.885663 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:55:38.967787 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:55:38.957406   10382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:38.958335   10382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:38.960358   10382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:38.961032   10382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:38.963013   10382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:55:38.967812 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:55:38.967828 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:55:39.000576 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:55:39.000615 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:55:41.533393 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:55:41.544133 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:55:41.544208 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:55:41.569392 1437114 cri.go:89] found id: ""
	I1209 05:55:41.569418 1437114 logs.go:282] 0 containers: []
	W1209 05:55:41.569428 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:55:41.569436 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:55:41.569499 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:55:41.595491 1437114 cri.go:89] found id: ""
	I1209 05:55:41.595517 1437114 logs.go:282] 0 containers: []
	W1209 05:55:41.595526 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:55:41.595532 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:55:41.595592 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:55:41.622211 1437114 cri.go:89] found id: ""
	I1209 05:55:41.622246 1437114 logs.go:282] 0 containers: []
	W1209 05:55:41.622256 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:55:41.622263 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:55:41.622323 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:55:41.646745 1437114 cri.go:89] found id: ""
	I1209 05:55:41.646770 1437114 logs.go:282] 0 containers: []
	W1209 05:55:41.646779 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:55:41.646785 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:55:41.646846 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:55:41.674665 1437114 cri.go:89] found id: ""
	I1209 05:55:41.674689 1437114 logs.go:282] 0 containers: []
	W1209 05:55:41.674699 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:55:41.674706 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:55:41.674768 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:55:41.702586 1437114 cri.go:89] found id: ""
	I1209 05:55:41.702610 1437114 logs.go:282] 0 containers: []
	W1209 05:55:41.702619 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:55:41.702628 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:55:41.702704 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:55:41.741493 1437114 cri.go:89] found id: ""
	I1209 05:55:41.741515 1437114 logs.go:282] 0 containers: []
	W1209 05:55:41.741523 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:55:41.741530 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:55:41.741666 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:55:41.768353 1437114 cri.go:89] found id: ""
	I1209 05:55:41.768465 1437114 logs.go:282] 0 containers: []
	W1209 05:55:41.768479 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:55:41.768490 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:55:41.768529 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:55:41.831484 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:55:41.823412   10488 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:41.824163   10488 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:41.825769   10488 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:41.826063   10488 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:41.827557   10488 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:55:41.831504 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:55:41.831517 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:55:41.857187 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:55:41.857222 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:55:41.887092 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:55:41.887123 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:55:41.943306 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:55:41.943341 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:55:44.461424 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:55:44.472240 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:55:44.472340 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:55:44.498935 1437114 cri.go:89] found id: ""
	I1209 05:55:44.498961 1437114 logs.go:282] 0 containers: []
	W1209 05:55:44.498970 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:55:44.498976 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:55:44.499034 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:55:44.523535 1437114 cri.go:89] found id: ""
	I1209 05:55:44.523564 1437114 logs.go:282] 0 containers: []
	W1209 05:55:44.523573 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:55:44.523579 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:55:44.523637 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:55:44.548432 1437114 cri.go:89] found id: ""
	I1209 05:55:44.548455 1437114 logs.go:282] 0 containers: []
	W1209 05:55:44.548463 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:55:44.548469 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:55:44.548526 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:55:44.573002 1437114 cri.go:89] found id: ""
	I1209 05:55:44.573024 1437114 logs.go:282] 0 containers: []
	W1209 05:55:44.573034 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:55:44.573040 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:55:44.573098 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:55:44.596807 1437114 cri.go:89] found id: ""
	I1209 05:55:44.596829 1437114 logs.go:282] 0 containers: []
	W1209 05:55:44.596838 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:55:44.596846 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:55:44.596901 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:55:44.624387 1437114 cri.go:89] found id: ""
	I1209 05:55:44.624456 1437114 logs.go:282] 0 containers: []
	W1209 05:55:44.624478 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:55:44.624492 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:55:44.624571 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:55:44.648117 1437114 cri.go:89] found id: ""
	I1209 05:55:44.648143 1437114 logs.go:282] 0 containers: []
	W1209 05:55:44.648151 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:55:44.648158 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:55:44.648229 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:55:44.671908 1437114 cri.go:89] found id: ""
	I1209 05:55:44.671939 1437114 logs.go:282] 0 containers: []
	W1209 05:55:44.671948 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:55:44.671972 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:55:44.671989 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:55:44.732458 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:55:44.732536 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:55:44.753248 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:55:44.753273 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:55:44.822117 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:55:44.814788   10605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:44.815161   10605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:44.816602   10605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:44.816898   10605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:44.818170   10605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:55:44.814788   10605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:44.815161   10605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:44.816602   10605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:44.816898   10605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:44.818170   10605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:55:44.822137 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:55:44.822149 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:55:44.848565 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:55:44.848600 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:55:47.376875 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:55:47.386961 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:55:47.387031 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:55:47.413420 1437114 cri.go:89] found id: ""
	I1209 05:55:47.413444 1437114 logs.go:282] 0 containers: []
	W1209 05:55:47.413452 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:55:47.413458 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:55:47.413519 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:55:47.441969 1437114 cri.go:89] found id: ""
	I1209 05:55:47.442001 1437114 logs.go:282] 0 containers: []
	W1209 05:55:47.442010 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:55:47.442016 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:55:47.442081 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:55:47.465166 1437114 cri.go:89] found id: ""
	I1209 05:55:47.465195 1437114 logs.go:282] 0 containers: []
	W1209 05:55:47.465210 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:55:47.465216 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:55:47.465283 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:55:47.493704 1437114 cri.go:89] found id: ""
	I1209 05:55:47.493730 1437114 logs.go:282] 0 containers: []
	W1209 05:55:47.493739 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:55:47.493745 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:55:47.493821 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:55:47.519554 1437114 cri.go:89] found id: ""
	I1209 05:55:47.519589 1437114 logs.go:282] 0 containers: []
	W1209 05:55:47.519598 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:55:47.519604 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:55:47.519671 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:55:47.549334 1437114 cri.go:89] found id: ""
	I1209 05:55:47.549367 1437114 logs.go:282] 0 containers: []
	W1209 05:55:47.549376 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:55:47.549383 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:55:47.549456 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:55:47.578946 1437114 cri.go:89] found id: ""
	I1209 05:55:47.578980 1437114 logs.go:282] 0 containers: []
	W1209 05:55:47.578989 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:55:47.578995 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:55:47.579062 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:55:47.603683 1437114 cri.go:89] found id: ""
	I1209 05:55:47.603716 1437114 logs.go:282] 0 containers: []
	W1209 05:55:47.603725 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:55:47.603734 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:55:47.603745 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:55:47.619447 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:55:47.619482 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:55:47.687529 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:55:47.675579   10710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:47.676174   10710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:47.679656   10710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:47.680257   10710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:47.681964   10710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:55:47.675579   10710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:47.676174   10710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:47.679656   10710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:47.680257   10710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:47.681964   10710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:55:47.687594 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:55:47.687641 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:55:47.715721 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:55:47.715792 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:55:47.745866 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:55:47.745889 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:55:50.305015 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:55:50.315642 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:55:50.315787 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:55:50.341274 1437114 cri.go:89] found id: ""
	I1209 05:55:50.341298 1437114 logs.go:282] 0 containers: []
	W1209 05:55:50.341306 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:55:50.341314 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:55:50.341370 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:55:50.366500 1437114 cri.go:89] found id: ""
	I1209 05:55:50.366533 1437114 logs.go:282] 0 containers: []
	W1209 05:55:50.366542 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:55:50.366548 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:55:50.366613 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:55:50.390751 1437114 cri.go:89] found id: ""
	I1209 05:55:50.390787 1437114 logs.go:282] 0 containers: []
	W1209 05:55:50.390796 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:55:50.390802 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:55:50.390867 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:55:50.418576 1437114 cri.go:89] found id: ""
	I1209 05:55:50.418601 1437114 logs.go:282] 0 containers: []
	W1209 05:55:50.418610 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:55:50.418616 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:55:50.418683 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:55:50.447207 1437114 cri.go:89] found id: ""
	I1209 05:55:50.447250 1437114 logs.go:282] 0 containers: []
	W1209 05:55:50.447261 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:55:50.447267 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:55:50.447339 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:55:50.476321 1437114 cri.go:89] found id: ""
	I1209 05:55:50.476346 1437114 logs.go:282] 0 containers: []
	W1209 05:55:50.476354 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:55:50.476372 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:55:50.476430 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:55:50.501331 1437114 cri.go:89] found id: ""
	I1209 05:55:50.501356 1437114 logs.go:282] 0 containers: []
	W1209 05:55:50.501365 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:55:50.501371 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:55:50.501439 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:55:50.525182 1437114 cri.go:89] found id: ""
	I1209 05:55:50.525207 1437114 logs.go:282] 0 containers: []
	W1209 05:55:50.525215 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:55:50.525224 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:55:50.525262 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:55:50.584512 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:55:50.584550 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:55:50.600341 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:55:50.600369 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:55:50.667248 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:55:50.658895   10823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:50.659509   10823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:50.661016   10823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:50.661529   10823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:50.663114   10823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:55:50.658895   10823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:50.659509   10823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:50.661016   10823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:50.661529   10823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:50.663114   10823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:55:50.667314 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:55:50.667346 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:55:50.695874 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:55:50.695911 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:55:53.232139 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:55:53.242299 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:55:53.242369 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:55:53.265738 1437114 cri.go:89] found id: ""
	I1209 05:55:53.265763 1437114 logs.go:282] 0 containers: []
	W1209 05:55:53.265771 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:55:53.265777 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:55:53.265834 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:55:53.289547 1437114 cri.go:89] found id: ""
	I1209 05:55:53.289571 1437114 logs.go:282] 0 containers: []
	W1209 05:55:53.289580 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:55:53.289586 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:55:53.289644 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:55:53.314432 1437114 cri.go:89] found id: ""
	I1209 05:55:53.314457 1437114 logs.go:282] 0 containers: []
	W1209 05:55:53.314466 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:55:53.314472 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:55:53.314529 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:55:53.338078 1437114 cri.go:89] found id: ""
	I1209 05:55:53.338100 1437114 logs.go:282] 0 containers: []
	W1209 05:55:53.338109 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:55:53.338115 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:55:53.338190 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:55:53.362597 1437114 cri.go:89] found id: ""
	I1209 05:55:53.362623 1437114 logs.go:282] 0 containers: []
	W1209 05:55:53.362632 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:55:53.362638 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:55:53.362700 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:55:53.387075 1437114 cri.go:89] found id: ""
	I1209 05:55:53.387100 1437114 logs.go:282] 0 containers: []
	W1209 05:55:53.387108 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:55:53.387115 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:55:53.387181 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:55:53.410813 1437114 cri.go:89] found id: ""
	I1209 05:55:53.410836 1437114 logs.go:282] 0 containers: []
	W1209 05:55:53.410845 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:55:53.410850 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:55:53.410910 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:55:53.439085 1437114 cri.go:89] found id: ""
	I1209 05:55:53.439107 1437114 logs.go:282] 0 containers: []
	W1209 05:55:53.439115 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:55:53.439124 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:55:53.439135 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:55:53.496416 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:55:53.496450 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:55:53.512950 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:55:53.512979 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:55:53.592134 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:55:53.583228   10933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:53.583903   10933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:53.585634   10933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:53.586183   10933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:53.587806   10933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:55:53.583228   10933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:53.583903   10933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:53.585634   10933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:53.586183   10933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:53.587806   10933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:55:53.592155 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:55:53.592168 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:55:53.620855 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:55:53.620901 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:55:56.151858 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:55:56.162360 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:55:56.162444 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:55:56.192447 1437114 cri.go:89] found id: ""
	I1209 05:55:56.192474 1437114 logs.go:282] 0 containers: []
	W1209 05:55:56.192482 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:55:56.192488 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:55:56.192545 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:55:56.230900 1437114 cri.go:89] found id: ""
	I1209 05:55:56.230927 1437114 logs.go:282] 0 containers: []
	W1209 05:55:56.230935 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:55:56.230941 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:55:56.231005 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:55:56.264649 1437114 cri.go:89] found id: ""
	I1209 05:55:56.264673 1437114 logs.go:282] 0 containers: []
	W1209 05:55:56.264683 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:55:56.264689 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:55:56.264748 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:55:56.287754 1437114 cri.go:89] found id: ""
	I1209 05:55:56.287780 1437114 logs.go:282] 0 containers: []
	W1209 05:55:56.287788 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:55:56.287794 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:55:56.287851 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:55:56.311939 1437114 cri.go:89] found id: ""
	I1209 05:55:56.311966 1437114 logs.go:282] 0 containers: []
	W1209 05:55:56.311974 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:55:56.311981 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:55:56.312071 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:55:56.336812 1437114 cri.go:89] found id: ""
	I1209 05:55:56.336847 1437114 logs.go:282] 0 containers: []
	W1209 05:55:56.336856 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:55:56.336862 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:55:56.336926 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:55:56.364355 1437114 cri.go:89] found id: ""
	I1209 05:55:56.364378 1437114 logs.go:282] 0 containers: []
	W1209 05:55:56.364387 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:55:56.364394 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:55:56.364451 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:55:56.388220 1437114 cri.go:89] found id: ""
	I1209 05:55:56.388242 1437114 logs.go:282] 0 containers: []
	W1209 05:55:56.388251 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:55:56.388260 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:55:56.388272 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:55:56.451922 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:55:56.443739   11039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:56.444234   11039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:56.445911   11039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:56.446440   11039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:56.448091   11039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:55:56.443739   11039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:56.444234   11039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:56.445911   11039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:56.446440   11039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:56.448091   11039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:55:56.451941 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:55:56.451955 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:55:56.477213 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:55:56.477256 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:55:56.504874 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:55:56.504908 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:55:56.561753 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:55:56.561793 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:55:59.078916 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:55:59.089470 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:55:59.089545 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:55:59.113298 1437114 cri.go:89] found id: ""
	I1209 05:55:59.113324 1437114 logs.go:282] 0 containers: []
	W1209 05:55:59.113332 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:55:59.113339 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:55:59.113402 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:55:59.141250 1437114 cri.go:89] found id: ""
	I1209 05:55:59.141278 1437114 logs.go:282] 0 containers: []
	W1209 05:55:59.141286 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:55:59.141292 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:55:59.141351 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:55:59.170020 1437114 cri.go:89] found id: ""
	I1209 05:55:59.170044 1437114 logs.go:282] 0 containers: []
	W1209 05:55:59.170052 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:55:59.170059 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:55:59.170122 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:55:59.210757 1437114 cri.go:89] found id: ""
	I1209 05:55:59.210792 1437114 logs.go:282] 0 containers: []
	W1209 05:55:59.210801 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:55:59.210808 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:55:59.210873 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:55:59.237433 1437114 cri.go:89] found id: ""
	I1209 05:55:59.237470 1437114 logs.go:282] 0 containers: []
	W1209 05:55:59.237479 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:55:59.237486 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:55:59.237551 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:55:59.263923 1437114 cri.go:89] found id: ""
	I1209 05:55:59.263959 1437114 logs.go:282] 0 containers: []
	W1209 05:55:59.263968 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:55:59.263975 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:55:59.264071 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:55:59.288850 1437114 cri.go:89] found id: ""
	I1209 05:55:59.288916 1437114 logs.go:282] 0 containers: []
	W1209 05:55:59.288940 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:55:59.288954 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:55:59.289029 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:55:59.316320 1437114 cri.go:89] found id: ""
	I1209 05:55:59.316347 1437114 logs.go:282] 0 containers: []
	W1209 05:55:59.316356 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:55:59.316365 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:55:59.316376 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:55:59.383644 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:55:59.373968   11152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:59.374816   11152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:59.376482   11152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:59.376830   11152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:59.378974   11152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:55:59.373968   11152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:59.374816   11152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:59.376482   11152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:59.376830   11152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:59.378974   11152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:55:59.383666 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:55:59.383680 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:55:59.409556 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:55:59.409591 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:55:59.440707 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:55:59.440737 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:55:59.496851 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:55:59.496887 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:56:02.013397 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:56:02.023815 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:56:02.023883 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:56:02.054212 1437114 cri.go:89] found id: ""
	I1209 05:56:02.054240 1437114 logs.go:282] 0 containers: []
	W1209 05:56:02.054249 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:56:02.054255 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:56:02.054323 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:56:02.079245 1437114 cri.go:89] found id: ""
	I1209 05:56:02.079274 1437114 logs.go:282] 0 containers: []
	W1209 05:56:02.079283 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:56:02.079289 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:56:02.079347 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:56:02.104356 1437114 cri.go:89] found id: ""
	I1209 05:56:02.104399 1437114 logs.go:282] 0 containers: []
	W1209 05:56:02.104408 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:56:02.104415 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:56:02.104478 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:56:02.129688 1437114 cri.go:89] found id: ""
	I1209 05:56:02.129753 1437114 logs.go:282] 0 containers: []
	W1209 05:56:02.129777 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:56:02.129795 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:56:02.129886 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:56:02.159435 1437114 cri.go:89] found id: ""
	I1209 05:56:02.159463 1437114 logs.go:282] 0 containers: []
	W1209 05:56:02.159471 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:56:02.159478 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:56:02.159537 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:56:02.193945 1437114 cri.go:89] found id: ""
	I1209 05:56:02.193969 1437114 logs.go:282] 0 containers: []
	W1209 05:56:02.193987 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:56:02.193994 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:56:02.194093 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:56:02.230499 1437114 cri.go:89] found id: ""
	I1209 05:56:02.230528 1437114 logs.go:282] 0 containers: []
	W1209 05:56:02.230542 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:56:02.230549 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:56:02.230650 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:56:02.261955 1437114 cri.go:89] found id: ""
	I1209 05:56:02.262021 1437114 logs.go:282] 0 containers: []
	W1209 05:56:02.262046 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:56:02.262063 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:56:02.262075 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:56:02.278208 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:56:02.278245 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:56:02.342511 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:56:02.334138   11267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:02.334823   11267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:02.336452   11267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:02.337017   11267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:02.338543   11267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:56:02.334138   11267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:02.334823   11267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:02.336452   11267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:02.337017   11267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:02.338543   11267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:56:02.342581 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:56:02.342603 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:56:02.367883 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:56:02.367920 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:56:02.398560 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:56:02.398587 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:56:04.956142 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:56:04.966664 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:56:04.966728 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:56:05.000769 1437114 cri.go:89] found id: ""
	I1209 05:56:05.000792 1437114 logs.go:282] 0 containers: []
	W1209 05:56:05.000801 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:56:05.000807 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:56:05.000868 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:56:05.030686 1437114 cri.go:89] found id: ""
	I1209 05:56:05.030713 1437114 logs.go:282] 0 containers: []
	W1209 05:56:05.030726 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:56:05.030733 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:56:05.030792 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:56:05.055515 1437114 cri.go:89] found id: ""
	I1209 05:56:05.055541 1437114 logs.go:282] 0 containers: []
	W1209 05:56:05.055550 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:56:05.055556 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:56:05.055614 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:56:05.080557 1437114 cri.go:89] found id: ""
	I1209 05:56:05.080584 1437114 logs.go:282] 0 containers: []
	W1209 05:56:05.080593 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:56:05.080599 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:56:05.080659 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:56:05.106686 1437114 cri.go:89] found id: ""
	I1209 05:56:05.106714 1437114 logs.go:282] 0 containers: []
	W1209 05:56:05.106724 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:56:05.106731 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:56:05.106792 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:56:05.131985 1437114 cri.go:89] found id: ""
	I1209 05:56:05.132044 1437114 logs.go:282] 0 containers: []
	W1209 05:56:05.132053 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:56:05.132060 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:56:05.132127 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:56:05.158936 1437114 cri.go:89] found id: ""
	I1209 05:56:05.159002 1437114 logs.go:282] 0 containers: []
	W1209 05:56:05.159027 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:56:05.159045 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:56:05.159134 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:56:05.186586 1437114 cri.go:89] found id: ""
	I1209 05:56:05.186658 1437114 logs.go:282] 0 containers: []
	W1209 05:56:05.186682 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:56:05.186704 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:56:05.186745 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:56:05.252531 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:56:05.252568 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:56:05.268794 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:56:05.268823 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:56:05.330847 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:56:05.322209   11379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:05.322901   11379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:05.324496   11379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:05.325041   11379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:05.326643   11379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:56:05.322209   11379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:05.322901   11379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:05.324496   11379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:05.325041   11379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:05.326643   11379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:56:05.330870 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:56:05.330882 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:56:05.356845 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:56:05.356877 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:56:07.894100 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:56:07.904726 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:56:07.904808 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:56:07.934685 1437114 cri.go:89] found id: ""
	I1209 05:56:07.934707 1437114 logs.go:282] 0 containers: []
	W1209 05:56:07.934715 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:56:07.934727 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:56:07.934786 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:56:07.966688 1437114 cri.go:89] found id: ""
	I1209 05:56:07.966715 1437114 logs.go:282] 0 containers: []
	W1209 05:56:07.966724 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:56:07.966730 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:56:07.966791 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:56:07.997688 1437114 cri.go:89] found id: ""
	I1209 05:56:07.997718 1437114 logs.go:282] 0 containers: []
	W1209 05:56:07.997727 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:56:07.997733 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:56:07.997794 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:56:08.028703 1437114 cri.go:89] found id: ""
	I1209 05:56:08.028738 1437114 logs.go:282] 0 containers: []
	W1209 05:56:08.028748 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:56:08.028756 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:56:08.028836 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:56:08.055186 1437114 cri.go:89] found id: ""
	I1209 05:56:08.055216 1437114 logs.go:282] 0 containers: []
	W1209 05:56:08.055225 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:56:08.055232 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:56:08.055298 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:56:08.081977 1437114 cri.go:89] found id: ""
	I1209 05:56:08.082005 1437114 logs.go:282] 0 containers: []
	W1209 05:56:08.082014 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:56:08.082020 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:56:08.082094 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:56:08.106694 1437114 cri.go:89] found id: ""
	I1209 05:56:08.106719 1437114 logs.go:282] 0 containers: []
	W1209 05:56:08.106728 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:56:08.106735 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:56:08.106794 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:56:08.131242 1437114 cri.go:89] found id: ""
	I1209 05:56:08.131266 1437114 logs.go:282] 0 containers: []
	W1209 05:56:08.131274 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:56:08.131284 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:56:08.131296 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:56:08.200236 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:56:08.191954   11490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:08.192809   11490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:08.194381   11490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:08.194676   11490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:08.196205   11490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:56:08.191954   11490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:08.192809   11490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:08.194381   11490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:08.194676   11490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:08.196205   11490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:56:08.200261 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:56:08.200275 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:56:08.228642 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:56:08.228684 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:56:08.262181 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:56:08.262210 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:56:08.316796 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:56:08.316828 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
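Each iteration of this diagnostic cycle enumerates the control-plane components with crictl ps -a --quiet --name=<component>; an empty ID list is what produces the "No container was found matching ..." warnings above and means the component never started. A minimal sketch of that per-component check, assuming only that crictl is installed on the node (an illustration of the pattern visible in the log, not minikube's cri.go):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same component list the log cycles through, in the same order.
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
		}
		for _, name := range components {
			// crictl prints one container ID per line; --quiet suppresses headers.
			out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
			ids := strings.Fields(string(out))
			if err != nil || len(ids) == 0 {
				fmt.Printf("no container was found matching %q\n", name)
				continue
			}
			fmt.Printf("%s: %d container(s): %v\n", name, len(ids), ids)
		}
	}

Run on the node itself; in the report the same commands are wrapped and executed remotely by ssh_runner.go.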
	I1209 05:56:10.832826 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:56:10.843625 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:56:10.843696 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:56:10.867741 1437114 cri.go:89] found id: ""
	I1209 05:56:10.867808 1437114 logs.go:282] 0 containers: []
	W1209 05:56:10.867832 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:56:10.867854 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:56:10.867940 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:56:10.893251 1437114 cri.go:89] found id: ""
	I1209 05:56:10.893284 1437114 logs.go:282] 0 containers: []
	W1209 05:56:10.893292 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:56:10.893298 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:56:10.893357 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:56:10.921874 1437114 cri.go:89] found id: ""
	I1209 05:56:10.921897 1437114 logs.go:282] 0 containers: []
	W1209 05:56:10.921906 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:56:10.921912 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:56:10.921977 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:56:10.948235 1437114 cri.go:89] found id: ""
	I1209 05:56:10.948257 1437114 logs.go:282] 0 containers: []
	W1209 05:56:10.948272 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:56:10.948279 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:56:10.948337 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:56:10.977204 1437114 cri.go:89] found id: ""
	I1209 05:56:10.977226 1437114 logs.go:282] 0 containers: []
	W1209 05:56:10.977234 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:56:10.977239 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:56:10.977298 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:56:11.011653 1437114 cri.go:89] found id: ""
	I1209 05:56:11.011677 1437114 logs.go:282] 0 containers: []
	W1209 05:56:11.011685 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:56:11.011692 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:56:11.011753 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:56:11.038552 1437114 cri.go:89] found id: ""
	I1209 05:56:11.038575 1437114 logs.go:282] 0 containers: []
	W1209 05:56:11.038584 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:56:11.038589 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:56:11.038648 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:56:11.068058 1437114 cri.go:89] found id: ""
	I1209 05:56:11.068081 1437114 logs.go:282] 0 containers: []
	W1209 05:56:11.068089 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:56:11.068098 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:56:11.068109 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:56:11.124172 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:56:11.124208 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:56:11.140275 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:56:11.140316 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:56:11.220317 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:56:11.212396   11608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:11.213001   11608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:11.214543   11608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:11.215016   11608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:11.216494   11608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:56:11.212396   11608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:11.213001   11608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:11.214543   11608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:11.215016   11608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:11.216494   11608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:56:11.220349 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:56:11.220362 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:56:11.245629 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:56:11.245662 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:56:13.776003 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:56:13.786369 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:56:13.786448 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:56:13.809520 1437114 cri.go:89] found id: ""
	I1209 05:56:13.809544 1437114 logs.go:282] 0 containers: []
	W1209 05:56:13.809553 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:56:13.809559 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:56:13.809618 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:56:13.833347 1437114 cri.go:89] found id: ""
	I1209 05:56:13.833370 1437114 logs.go:282] 0 containers: []
	W1209 05:56:13.833378 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:56:13.833384 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:56:13.833446 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:56:13.857799 1437114 cri.go:89] found id: ""
	I1209 05:56:13.857830 1437114 logs.go:282] 0 containers: []
	W1209 05:56:13.857840 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:56:13.857846 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:56:13.857906 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:56:13.882625 1437114 cri.go:89] found id: ""
	I1209 05:56:13.882658 1437114 logs.go:282] 0 containers: []
	W1209 05:56:13.882667 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:56:13.882673 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:56:13.882742 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:56:13.910846 1437114 cri.go:89] found id: ""
	I1209 05:56:13.910880 1437114 logs.go:282] 0 containers: []
	W1209 05:56:13.910889 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:56:13.910895 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:56:13.910962 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:56:13.942418 1437114 cri.go:89] found id: ""
	I1209 05:56:13.942483 1437114 logs.go:282] 0 containers: []
	W1209 05:56:13.942510 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:56:13.942528 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:56:13.942615 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:56:13.972617 1437114 cri.go:89] found id: ""
	I1209 05:56:13.972686 1437114 logs.go:282] 0 containers: []
	W1209 05:56:13.972710 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:56:13.972728 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:56:13.972814 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:56:14.010643 1437114 cri.go:89] found id: ""
	I1209 05:56:14.010672 1437114 logs.go:282] 0 containers: []
	W1209 05:56:14.010690 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:56:14.010712 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:56:14.010743 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:56:14.045403 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:56:14.045489 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:56:14.103757 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:56:14.103793 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:56:14.119622 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:56:14.119648 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:56:14.199726 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:56:14.184447   11733 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:14.191243   11733 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:14.191781   11733 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:14.193355   11733 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:14.193912   11733 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:56:14.184447   11733 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:14.191243   11733 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:14.191781   11733 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:14.193355   11733 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:14.193912   11733 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:56:14.199794 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:56:14.199821 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:56:16.729940 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:56:16.740423 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:56:16.740497 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:56:16.765729 1437114 cri.go:89] found id: ""
	I1209 05:56:16.765755 1437114 logs.go:282] 0 containers: []
	W1209 05:56:16.765763 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:56:16.765770 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:56:16.765831 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:56:16.793724 1437114 cri.go:89] found id: ""
	I1209 05:56:16.793750 1437114 logs.go:282] 0 containers: []
	W1209 05:56:16.793759 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:56:16.793765 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:56:16.793824 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:56:16.821402 1437114 cri.go:89] found id: ""
	I1209 05:56:16.821429 1437114 logs.go:282] 0 containers: []
	W1209 05:56:16.821437 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:56:16.821444 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:56:16.821504 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:56:16.846074 1437114 cri.go:89] found id: ""
	I1209 05:56:16.846101 1437114 logs.go:282] 0 containers: []
	W1209 05:56:16.846110 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:56:16.846116 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:56:16.846175 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:56:16.870665 1437114 cri.go:89] found id: ""
	I1209 05:56:16.870689 1437114 logs.go:282] 0 containers: []
	W1209 05:56:16.870698 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:56:16.870705 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:56:16.870785 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:56:16.894509 1437114 cri.go:89] found id: ""
	I1209 05:56:16.894542 1437114 logs.go:282] 0 containers: []
	W1209 05:56:16.894550 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:56:16.894557 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:56:16.894651 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:56:16.921935 1437114 cri.go:89] found id: ""
	I1209 05:56:16.921962 1437114 logs.go:282] 0 containers: []
	W1209 05:56:16.921971 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:56:16.921977 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:56:16.922049 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:56:16.950536 1437114 cri.go:89] found id: ""
	I1209 05:56:16.950570 1437114 logs.go:282] 0 containers: []
	W1209 05:56:16.950579 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:56:16.950588 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:56:16.950599 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:56:17.008406 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:56:17.008442 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:56:17.024072 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:56:17.024098 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:56:17.089436 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:56:17.080816   11835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:17.081612   11835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:17.083298   11835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:17.083818   11835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:17.085479   11835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:56:17.080816   11835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:17.081612   11835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:17.083298   11835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:17.083818   11835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:17.085479   11835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:56:17.089456 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:56:17.089468 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:56:17.114751 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:56:17.114785 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:56:19.649189 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:56:19.659355 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:56:19.659709 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:56:19.687359 1437114 cri.go:89] found id: ""
	I1209 05:56:19.687393 1437114 logs.go:282] 0 containers: []
	W1209 05:56:19.687402 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:56:19.687408 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:56:19.687482 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:56:19.711167 1437114 cri.go:89] found id: ""
	I1209 05:56:19.711241 1437114 logs.go:282] 0 containers: []
	W1209 05:56:19.711264 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:56:19.711282 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:56:19.711361 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:56:19.734776 1437114 cri.go:89] found id: ""
	I1209 05:56:19.734843 1437114 logs.go:282] 0 containers: []
	W1209 05:56:19.734868 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:56:19.734886 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:56:19.734978 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:56:19.758075 1437114 cri.go:89] found id: ""
	I1209 05:56:19.758101 1437114 logs.go:282] 0 containers: []
	W1209 05:56:19.758111 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:56:19.758117 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:56:19.758191 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:56:19.781866 1437114 cri.go:89] found id: ""
	I1209 05:56:19.781889 1437114 logs.go:282] 0 containers: []
	W1209 05:56:19.781897 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:56:19.781903 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:56:19.782011 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:56:19.806779 1437114 cri.go:89] found id: ""
	I1209 05:56:19.806811 1437114 logs.go:282] 0 containers: []
	W1209 05:56:19.806820 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:56:19.806827 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:56:19.806896 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:56:19.830892 1437114 cri.go:89] found id: ""
	I1209 05:56:19.830931 1437114 logs.go:282] 0 containers: []
	W1209 05:56:19.830940 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:56:19.830946 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:56:19.831013 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:56:19.855119 1437114 cri.go:89] found id: ""
	I1209 05:56:19.855151 1437114 logs.go:282] 0 containers: []
	W1209 05:56:19.855160 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:56:19.855168 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:56:19.855180 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:56:19.918437 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:56:19.910860   11934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:19.911393   11934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:19.912883   11934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:19.913323   11934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:19.914743   11934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:56:19.910860   11934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:19.911393   11934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:19.912883   11934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:19.913323   11934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:19.914743   11934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:56:19.918456 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:56:19.918468 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:56:19.948986 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:56:19.949022 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:56:19.983513 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:56:19.983543 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:56:20.044570 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:56:20.044611 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:56:22.561138 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:56:22.571631 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:56:22.571701 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:56:22.597481 1437114 cri.go:89] found id: ""
	I1209 05:56:22.597507 1437114 logs.go:282] 0 containers: []
	W1209 05:56:22.597516 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:56:22.597522 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:56:22.597583 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:56:22.620910 1437114 cri.go:89] found id: ""
	I1209 05:56:22.620934 1437114 logs.go:282] 0 containers: []
	W1209 05:56:22.620942 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:56:22.620948 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:56:22.621010 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:56:22.645762 1437114 cri.go:89] found id: ""
	I1209 05:56:22.645786 1437114 logs.go:282] 0 containers: []
	W1209 05:56:22.645794 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:56:22.645802 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:56:22.645860 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:56:22.674030 1437114 cri.go:89] found id: ""
	I1209 05:56:22.674055 1437114 logs.go:282] 0 containers: []
	W1209 05:56:22.674063 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:56:22.674069 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:56:22.674129 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:56:22.697420 1437114 cri.go:89] found id: ""
	I1209 05:56:22.697483 1437114 logs.go:282] 0 containers: []
	W1209 05:56:22.697498 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:56:22.697505 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:56:22.697572 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:56:22.721275 1437114 cri.go:89] found id: ""
	I1209 05:56:22.721303 1437114 logs.go:282] 0 containers: []
	W1209 05:56:22.721311 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:56:22.721318 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:56:22.721375 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:56:22.751174 1437114 cri.go:89] found id: ""
	I1209 05:56:22.751207 1437114 logs.go:282] 0 containers: []
	W1209 05:56:22.751216 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:56:22.751223 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:56:22.751297 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:56:22.783334 1437114 cri.go:89] found id: ""
	I1209 05:56:22.783359 1437114 logs.go:282] 0 containers: []
	W1209 05:56:22.783368 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:56:22.783377 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:56:22.783388 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:56:22.798903 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:56:22.798931 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:56:22.863930 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:56:22.855168   12050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:22.855903   12050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:22.857473   12050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:22.858541   12050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:22.859308   12050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:56:22.855168   12050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:22.855903   12050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:22.857473   12050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:22.858541   12050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:22.859308   12050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:56:22.863951 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:56:22.863964 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:56:22.889010 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:56:22.889044 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:56:22.917472 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:56:22.917497 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:56:25.477751 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:56:25.488155 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:56:25.488227 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:56:25.513691 1437114 cri.go:89] found id: ""
	I1209 05:56:25.513726 1437114 logs.go:282] 0 containers: []
	W1209 05:56:25.513735 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:56:25.513742 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:56:25.513815 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:56:25.538394 1437114 cri.go:89] found id: ""
	I1209 05:56:25.538426 1437114 logs.go:282] 0 containers: []
	W1209 05:56:25.538434 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:56:25.538441 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:56:25.538507 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:56:25.565992 1437114 cri.go:89] found id: ""
	I1209 05:56:25.566014 1437114 logs.go:282] 0 containers: []
	W1209 05:56:25.566023 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:56:25.566028 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:56:25.566084 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:56:25.594238 1437114 cri.go:89] found id: ""
	I1209 05:56:25.594273 1437114 logs.go:282] 0 containers: []
	W1209 05:56:25.594283 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:56:25.594289 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:56:25.594357 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:56:25.618528 1437114 cri.go:89] found id: ""
	I1209 05:56:25.618554 1437114 logs.go:282] 0 containers: []
	W1209 05:56:25.618562 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:56:25.618569 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:56:25.618630 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:56:25.645761 1437114 cri.go:89] found id: ""
	I1209 05:56:25.645793 1437114 logs.go:282] 0 containers: []
	W1209 05:56:25.645802 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:56:25.645809 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:56:25.645868 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:56:25.673275 1437114 cri.go:89] found id: ""
	I1209 05:56:25.673303 1437114 logs.go:282] 0 containers: []
	W1209 05:56:25.673313 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:56:25.673320 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:56:25.673378 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:56:25.698776 1437114 cri.go:89] found id: ""
	I1209 05:56:25.698801 1437114 logs.go:282] 0 containers: []
	W1209 05:56:25.698810 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:56:25.698819 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:56:25.698831 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:56:25.758726 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:56:25.758763 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:56:25.774459 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:56:25.774498 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:56:25.837634 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:56:25.829791   12165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:25.830357   12165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:25.831894   12165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:25.832310   12165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:25.833747   12165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:56:25.829791   12165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:25.830357   12165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:25.831894   12165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:25.832310   12165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:25.833747   12165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:56:25.837654 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:56:25.837666 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:56:25.863059 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:56:25.863089 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:56:28.390209 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:56:28.400783 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:56:28.400858 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:56:28.431158 1437114 cri.go:89] found id: ""
	I1209 05:56:28.431186 1437114 logs.go:282] 0 containers: []
	W1209 05:56:28.431195 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:56:28.431201 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:56:28.431257 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:56:28.466252 1437114 cri.go:89] found id: ""
	I1209 05:56:28.466304 1437114 logs.go:282] 0 containers: []
	W1209 05:56:28.466313 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:56:28.466319 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:56:28.466387 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:56:28.495101 1437114 cri.go:89] found id: ""
	I1209 05:56:28.495128 1437114 logs.go:282] 0 containers: []
	W1209 05:56:28.495135 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:56:28.495141 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:56:28.495205 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:56:28.519814 1437114 cri.go:89] found id: ""
	I1209 05:56:28.519840 1437114 logs.go:282] 0 containers: []
	W1209 05:56:28.519848 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:56:28.519854 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:56:28.519917 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:56:28.545987 1437114 cri.go:89] found id: ""
	I1209 05:56:28.546014 1437114 logs.go:282] 0 containers: []
	W1209 05:56:28.546022 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:56:28.546029 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:56:28.546087 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:56:28.569653 1437114 cri.go:89] found id: ""
	I1209 05:56:28.569677 1437114 logs.go:282] 0 containers: []
	W1209 05:56:28.569686 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:56:28.569693 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:56:28.569750 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:56:28.592506 1437114 cri.go:89] found id: ""
	I1209 05:56:28.592531 1437114 logs.go:282] 0 containers: []
	W1209 05:56:28.592540 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:56:28.592546 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:56:28.592603 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:56:28.616082 1437114 cri.go:89] found id: ""
	I1209 05:56:28.616109 1437114 logs.go:282] 0 containers: []
	W1209 05:56:28.616118 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:56:28.616127 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:56:28.616140 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:56:28.641671 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:56:28.641702 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:56:28.667950 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:56:28.667976 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:56:28.723545 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:56:28.723579 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:56:28.739105 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:56:28.739133 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:56:28.799453 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:56:28.791383   12291 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:28.792152   12291 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:28.793337   12291 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:28.793895   12291 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:28.795399   12291 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:56:28.791383   12291 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:28.792152   12291 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:28.793337   12291 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:28.793895   12291 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:28.795399   12291 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
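
The cycle above is the shape of everything that follows in this log: minikube probes the node for each expected control-plane container by name, finds none, and falls back to gathering kubelet, dmesg, containerd, and node logs. As a minimal standalone sketch (not minikube's actual cri.go/logs.go code; it assumes crictl is installed on the node and that the caller can use sudo), the per-component probe the log keeps running looks like this in Go:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listCRI mirrors the probe each cycle above runs:
//   sudo crictl ps -a --quiet --name=<component>
// and returns whatever container IDs it printed (none, in this run).
func listCRI(component string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	// The log repeats this sweep roughly every three seconds while it
	// waits for the apiserver to appear.
	for _, c := range components {
		ids, err := listCRI(c)
		if err != nil {
			fmt.Printf("probe for %q failed: %v\n", c, err)
			continue
		}
		fmt.Printf("%q: %d container(s) %v\n", c, len(ids), ids)
	}
}

An empty ID list for every component, including kube-apiserver itself, is what produces the repeated 'No container was found matching ...' warnings throughout this section.
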
	I1209 05:56:31.300174 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:56:31.310601 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:56:31.310671 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:56:31.335264 1437114 cri.go:89] found id: ""
	I1209 05:56:31.335286 1437114 logs.go:282] 0 containers: []
	W1209 05:56:31.335295 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:56:31.335301 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:56:31.335359 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:56:31.359354 1437114 cri.go:89] found id: ""
	I1209 05:56:31.359377 1437114 logs.go:282] 0 containers: []
	W1209 05:56:31.359386 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:56:31.359392 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:56:31.359451 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:56:31.385360 1437114 cri.go:89] found id: ""
	I1209 05:56:31.385383 1437114 logs.go:282] 0 containers: []
	W1209 05:56:31.385392 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:56:31.385398 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:56:31.385463 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:56:31.410224 1437114 cri.go:89] found id: ""
	I1209 05:56:31.410250 1437114 logs.go:282] 0 containers: []
	W1209 05:56:31.410258 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:56:31.410265 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:56:31.410359 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:56:31.451992 1437114 cri.go:89] found id: ""
	I1209 05:56:31.452040 1437114 logs.go:282] 0 containers: []
	W1209 05:56:31.452049 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:56:31.452056 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:56:31.452116 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:56:31.484950 1437114 cri.go:89] found id: ""
	I1209 05:56:31.484979 1437114 logs.go:282] 0 containers: []
	W1209 05:56:31.484987 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:56:31.484994 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:56:31.485052 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:56:31.518900 1437114 cri.go:89] found id: ""
	I1209 05:56:31.518929 1437114 logs.go:282] 0 containers: []
	W1209 05:56:31.518938 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:56:31.518944 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:56:31.519004 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:56:31.542368 1437114 cri.go:89] found id: ""
	I1209 05:56:31.542398 1437114 logs.go:282] 0 containers: []
	W1209 05:56:31.542406 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:56:31.542414 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:56:31.542426 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:56:31.597391 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:56:31.597426 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:56:31.613542 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:56:31.613568 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:56:31.675768 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:56:31.667793   12388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:31.668366   12388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:31.670085   12388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:31.670523   12388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:31.672049   12388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:56:31.667793   12388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:31.668366   12388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:31.670085   12388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:31.670523   12388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:31.672049   12388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:56:31.675790 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:56:31.675801 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:56:31.705823 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:56:31.705860 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
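
Every 'describe nodes' attempt in this log fails the same way: the kubectl client retries its API group discovery five times, and each attempt dies in the TCP connect, before TLS or auth are even reached. As a hypothetical standalone check (not part of the test suite), the same failure can be reproduced with a bare dial in Go, which fails immediately when no apiserver is listening on the port:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// "dial tcp [::1]:8443: connect: connection refused" in the stderr above
	// means the TCP handshake itself is refused; the same dial, made directly,
	// reports the identical error without going through kubectl.
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver not reachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("apiserver port is open")
}
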
	I1209 05:56:34.233697 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:56:34.244491 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:56:34.244562 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:56:34.269357 1437114 cri.go:89] found id: ""
	I1209 05:56:34.269382 1437114 logs.go:282] 0 containers: []
	W1209 05:56:34.269393 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:56:34.269399 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:56:34.269455 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:56:34.298358 1437114 cri.go:89] found id: ""
	I1209 05:56:34.298389 1437114 logs.go:282] 0 containers: []
	W1209 05:56:34.298398 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:56:34.298404 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:56:34.298463 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:56:34.323280 1437114 cri.go:89] found id: ""
	I1209 05:56:34.323301 1437114 logs.go:282] 0 containers: []
	W1209 05:56:34.323309 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:56:34.323315 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:56:34.323372 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:56:34.347068 1437114 cri.go:89] found id: ""
	I1209 05:56:34.347144 1437114 logs.go:282] 0 containers: []
	W1209 05:56:34.347166 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:56:34.347185 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:56:34.347268 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:56:34.370494 1437114 cri.go:89] found id: ""
	I1209 05:56:34.370519 1437114 logs.go:282] 0 containers: []
	W1209 05:56:34.370528 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:56:34.370534 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:56:34.370593 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:56:34.394561 1437114 cri.go:89] found id: ""
	I1209 05:56:34.394586 1437114 logs.go:282] 0 containers: []
	W1209 05:56:34.394594 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:56:34.394601 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:56:34.394665 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:56:34.418680 1437114 cri.go:89] found id: ""
	I1209 05:56:34.418708 1437114 logs.go:282] 0 containers: []
	W1209 05:56:34.418717 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:56:34.418723 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:56:34.418781 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:56:34.456783 1437114 cri.go:89] found id: ""
	I1209 05:56:34.456811 1437114 logs.go:282] 0 containers: []
	W1209 05:56:34.456819 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:56:34.456828 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:56:34.456839 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:56:34.520119 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:56:34.520160 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:56:34.536245 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:56:34.536271 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:56:34.598782 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:56:34.590200   12497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:34.590688   12497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:34.592324   12497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:34.592957   12497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:34.593923   12497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:56:34.590200   12497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:34.590688   12497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:34.592324   12497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:34.592957   12497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:34.593923   12497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:56:34.598802 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:56:34.598813 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:56:34.623426 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:56:34.623456 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:56:37.156294 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:56:37.167303 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:56:37.167376 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:56:37.213639 1437114 cri.go:89] found id: ""
	I1209 05:56:37.213661 1437114 logs.go:282] 0 containers: []
	W1209 05:56:37.213670 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:56:37.213676 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:56:37.213734 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:56:37.251381 1437114 cri.go:89] found id: ""
	I1209 05:56:37.251451 1437114 logs.go:282] 0 containers: []
	W1209 05:56:37.251472 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:56:37.251489 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:56:37.251577 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:56:37.276652 1437114 cri.go:89] found id: ""
	I1209 05:56:37.276683 1437114 logs.go:282] 0 containers: []
	W1209 05:56:37.276718 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:56:37.276730 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:56:37.276807 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:56:37.306291 1437114 cri.go:89] found id: ""
	I1209 05:56:37.306355 1437114 logs.go:282] 0 containers: []
	W1209 05:56:37.306378 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:56:37.306397 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:56:37.306480 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:56:37.330690 1437114 cri.go:89] found id: ""
	I1209 05:56:37.330761 1437114 logs.go:282] 0 containers: []
	W1209 05:56:37.330784 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:56:37.330803 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:56:37.330891 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:56:37.360974 1437114 cri.go:89] found id: ""
	I1209 05:56:37.360996 1437114 logs.go:282] 0 containers: []
	W1209 05:56:37.361005 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:56:37.361011 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:56:37.361067 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:56:37.385070 1437114 cri.go:89] found id: ""
	I1209 05:56:37.385134 1437114 logs.go:282] 0 containers: []
	W1209 05:56:37.385149 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:56:37.385157 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:56:37.385214 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:56:37.408838 1437114 cri.go:89] found id: ""
	I1209 05:56:37.408872 1437114 logs.go:282] 0 containers: []
	W1209 05:56:37.408881 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:56:37.408890 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:56:37.408904 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:56:37.470471 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:56:37.470552 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:56:37.490560 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:56:37.490636 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:56:37.566595 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:56:37.558458   12607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:37.559098   12607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:37.560793   12607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:37.561249   12607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:37.562761   12607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:56:37.558458   12607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:37.559098   12607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:37.560793   12607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:37.561249   12607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:37.562761   12607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:56:37.566616 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:56:37.566629 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:56:37.591926 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:56:37.591966 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:56:40.120818 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:56:40.132357 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:56:40.132434 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:56:40.159057 1437114 cri.go:89] found id: ""
	I1209 05:56:40.159127 1437114 logs.go:282] 0 containers: []
	W1209 05:56:40.159150 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:56:40.159172 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:56:40.159260 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:56:40.194739 1437114 cri.go:89] found id: ""
	I1209 05:56:40.194762 1437114 logs.go:282] 0 containers: []
	W1209 05:56:40.194770 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:56:40.194777 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:56:40.194842 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:56:40.229613 1437114 cri.go:89] found id: ""
	I1209 05:56:40.229642 1437114 logs.go:282] 0 containers: []
	W1209 05:56:40.229651 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:56:40.229657 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:56:40.229720 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:56:40.266599 1437114 cri.go:89] found id: ""
	I1209 05:56:40.266622 1437114 logs.go:282] 0 containers: []
	W1209 05:56:40.266631 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:56:40.266643 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:56:40.266705 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:56:40.293941 1437114 cri.go:89] found id: ""
	I1209 05:56:40.293964 1437114 logs.go:282] 0 containers: []
	W1209 05:56:40.293973 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:56:40.293979 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:56:40.294037 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:56:40.319374 1437114 cri.go:89] found id: ""
	I1209 05:56:40.319407 1437114 logs.go:282] 0 containers: []
	W1209 05:56:40.319416 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:56:40.319423 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:56:40.319497 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:56:40.344221 1437114 cri.go:89] found id: ""
	I1209 05:56:40.344254 1437114 logs.go:282] 0 containers: []
	W1209 05:56:40.344263 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:56:40.344268 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:56:40.344333 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:56:40.369033 1437114 cri.go:89] found id: ""
	I1209 05:56:40.369056 1437114 logs.go:282] 0 containers: []
	W1209 05:56:40.369066 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:56:40.369076 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:56:40.369088 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:56:40.398480 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:56:40.398506 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:56:40.454913 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:56:40.454992 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:56:40.471549 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:56:40.471617 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:56:40.537419 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:56:40.529111   12732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:40.529745   12732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:40.531419   12732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:40.532052   12732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:40.533493   12732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:56:40.529111   12732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:40.529745   12732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:40.531419   12732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:40.532052   12732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:40.533493   12732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:56:40.537440 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:56:40.537452 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:56:43.063560 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:56:43.074056 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:56:43.074128 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:56:43.098443 1437114 cri.go:89] found id: ""
	I1209 05:56:43.098467 1437114 logs.go:282] 0 containers: []
	W1209 05:56:43.098476 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:56:43.098483 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:56:43.098543 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:56:43.123378 1437114 cri.go:89] found id: ""
	I1209 05:56:43.123405 1437114 logs.go:282] 0 containers: []
	W1209 05:56:43.123414 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:56:43.123420 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:56:43.123483 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:56:43.152283 1437114 cri.go:89] found id: ""
	I1209 05:56:43.152313 1437114 logs.go:282] 0 containers: []
	W1209 05:56:43.152322 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:56:43.152329 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:56:43.152389 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:56:43.176720 1437114 cri.go:89] found id: ""
	I1209 05:56:43.176744 1437114 logs.go:282] 0 containers: []
	W1209 05:56:43.176752 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:56:43.176759 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:56:43.176816 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:56:43.202038 1437114 cri.go:89] found id: ""
	I1209 05:56:43.202066 1437114 logs.go:282] 0 containers: []
	W1209 05:56:43.202074 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:56:43.202081 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:56:43.202136 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:56:43.231595 1437114 cri.go:89] found id: ""
	I1209 05:56:43.231620 1437114 logs.go:282] 0 containers: []
	W1209 05:56:43.231629 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:56:43.231636 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:56:43.231693 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:56:43.261330 1437114 cri.go:89] found id: ""
	I1209 05:56:43.261351 1437114 logs.go:282] 0 containers: []
	W1209 05:56:43.261359 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:56:43.261365 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:56:43.261422 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:56:43.290154 1437114 cri.go:89] found id: ""
	I1209 05:56:43.290175 1437114 logs.go:282] 0 containers: []
	W1209 05:56:43.290183 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:56:43.290192 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:56:43.290204 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:56:43.318398 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:56:43.318424 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:56:43.377076 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:56:43.377112 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:56:43.392846 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:56:43.392877 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:56:43.468351 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:56:43.457690   12848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:43.458459   12848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:43.460248   12848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:43.460927   12848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:43.462463   12848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:56:43.457690   12848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:43.458459   12848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:43.460248   12848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:43.460927   12848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:43.462463   12848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:56:43.468373 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:56:43.468384 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:56:46.000301 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:56:46.013622 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:56:46.013695 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:56:46.043040 1437114 cri.go:89] found id: ""
	I1209 05:56:46.043066 1437114 logs.go:282] 0 containers: []
	W1209 05:56:46.043074 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:56:46.043081 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:56:46.043164 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:56:46.073486 1437114 cri.go:89] found id: ""
	I1209 05:56:46.073512 1437114 logs.go:282] 0 containers: []
	W1209 05:56:46.073521 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:56:46.073529 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:56:46.073593 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:56:46.099148 1437114 cri.go:89] found id: ""
	I1209 05:56:46.099175 1437114 logs.go:282] 0 containers: []
	W1209 05:56:46.099185 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:56:46.099193 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:56:46.099252 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:56:46.123167 1437114 cri.go:89] found id: ""
	I1209 05:56:46.123191 1437114 logs.go:282] 0 containers: []
	W1209 05:56:46.123200 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:56:46.123207 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:56:46.123271 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:56:46.151973 1437114 cri.go:89] found id: ""
	I1209 05:56:46.151999 1437114 logs.go:282] 0 containers: []
	W1209 05:56:46.152008 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:56:46.152035 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:56:46.152098 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:56:46.177766 1437114 cri.go:89] found id: ""
	I1209 05:56:46.177798 1437114 logs.go:282] 0 containers: []
	W1209 05:56:46.177807 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:56:46.177813 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:56:46.177871 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:56:46.206986 1437114 cri.go:89] found id: ""
	I1209 05:56:46.207008 1437114 logs.go:282] 0 containers: []
	W1209 05:56:46.207017 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:56:46.207023 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:56:46.207081 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:56:46.233946 1437114 cri.go:89] found id: ""
	I1209 05:56:46.233968 1437114 logs.go:282] 0 containers: []
	W1209 05:56:46.233977 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:56:46.233986 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:56:46.233997 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:56:46.298127 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:56:46.289387   12939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:46.289949   12939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:46.291474   12939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:46.292041   12939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:46.293829   12939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:56:46.289387   12939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:46.289949   12939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:46.291474   12939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:46.292041   12939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:46.293829   12939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:56:46.298150 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:56:46.298162 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:56:46.323208 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:56:46.323239 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:56:46.355077 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:56:46.355106 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:56:46.410415 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:56:46.410452 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:56:48.926721 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:56:48.937257 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:56:48.937332 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:56:48.961648 1437114 cri.go:89] found id: ""
	I1209 05:56:48.961676 1437114 logs.go:282] 0 containers: []
	W1209 05:56:48.961685 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:56:48.961698 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:56:48.961758 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:56:48.989144 1437114 cri.go:89] found id: ""
	I1209 05:56:48.989169 1437114 logs.go:282] 0 containers: []
	W1209 05:56:48.989178 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:56:48.989184 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:56:48.989240 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:56:49.014588 1437114 cri.go:89] found id: ""
	I1209 05:56:49.014613 1437114 logs.go:282] 0 containers: []
	W1209 05:56:49.014622 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:56:49.014628 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:56:49.014691 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:56:49.038311 1437114 cri.go:89] found id: ""
	I1209 05:56:49.038339 1437114 logs.go:282] 0 containers: []
	W1209 05:56:49.038349 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:56:49.038355 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:56:49.038414 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:56:49.062714 1437114 cri.go:89] found id: ""
	I1209 05:56:49.062740 1437114 logs.go:282] 0 containers: []
	W1209 05:56:49.062748 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:56:49.062754 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:56:49.062814 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:56:49.089769 1437114 cri.go:89] found id: ""
	I1209 05:56:49.089798 1437114 logs.go:282] 0 containers: []
	W1209 05:56:49.089807 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:56:49.089815 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:56:49.089892 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:56:49.118456 1437114 cri.go:89] found id: ""
	I1209 05:56:49.118477 1437114 logs.go:282] 0 containers: []
	W1209 05:56:49.118486 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:56:49.118492 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:56:49.118548 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:56:49.146213 1437114 cri.go:89] found id: ""
	I1209 05:56:49.146241 1437114 logs.go:282] 0 containers: []
	W1209 05:56:49.146260 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:56:49.146286 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:56:49.146304 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:56:49.171755 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:56:49.171792 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:56:49.210632 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:56:49.210700 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:56:49.274853 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:56:49.274890 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:56:49.290746 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:56:49.290774 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:56:49.352595 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:56:49.344509   13068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:49.345192   13068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:49.346929   13068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:49.347389   13068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:49.348821   13068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:56:49.344509   13068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:49.345192   13068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:49.346929   13068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:49.347389   13068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:49.348821   13068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:56:51.854276 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:56:51.864787 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:56:51.864868 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:56:51.888399 1437114 cri.go:89] found id: ""
	I1209 05:56:51.888422 1437114 logs.go:282] 0 containers: []
	W1209 05:56:51.888431 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:56:51.888437 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:56:51.888499 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:56:51.913838 1437114 cri.go:89] found id: ""
	I1209 05:56:51.913865 1437114 logs.go:282] 0 containers: []
	W1209 05:56:51.913873 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:56:51.913880 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:56:51.913961 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:56:51.938727 1437114 cri.go:89] found id: ""
	I1209 05:56:51.938768 1437114 logs.go:282] 0 containers: []
	W1209 05:56:51.938794 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:56:51.938811 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:56:51.938885 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:56:51.964549 1437114 cri.go:89] found id: ""
	I1209 05:56:51.964576 1437114 logs.go:282] 0 containers: []
	W1209 05:56:51.964584 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:56:51.964590 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:56:51.964689 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:56:51.988777 1437114 cri.go:89] found id: ""
	I1209 05:56:51.988806 1437114 logs.go:282] 0 containers: []
	W1209 05:56:51.988815 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:56:51.988821 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:56:51.988908 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:56:52.017110 1437114 cri.go:89] found id: ""
	I1209 05:56:52.017138 1437114 logs.go:282] 0 containers: []
	W1209 05:56:52.017147 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:56:52.017154 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:56:52.017219 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:56:52.043184 1437114 cri.go:89] found id: ""
	I1209 05:56:52.043211 1437114 logs.go:282] 0 containers: []
	W1209 05:56:52.043219 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:56:52.043225 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:56:52.043293 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:56:52.068591 1437114 cri.go:89] found id: ""
	I1209 05:56:52.068617 1437114 logs.go:282] 0 containers: []
	W1209 05:56:52.068626 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:56:52.068636 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:56:52.068652 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:56:52.135805 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:56:52.127242   13157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:52.127996   13157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:52.129698   13157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:52.130086   13157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:52.131645   13157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:56:52.127242   13157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:52.127996   13157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:52.129698   13157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:52.130086   13157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:52.131645   13157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:56:52.135824 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:56:52.135837 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:56:52.160848 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:56:52.160884 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:56:52.206902 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:56:52.206930 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:56:52.269206 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:56:52.269242 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:56:54.786534 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:56:54.796870 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:56:54.796942 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:56:54.820891 1437114 cri.go:89] found id: ""
	I1209 05:56:54.820912 1437114 logs.go:282] 0 containers: []
	W1209 05:56:54.820920 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:56:54.820926 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:56:54.820983 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:56:54.844219 1437114 cri.go:89] found id: ""
	I1209 05:56:54.844243 1437114 logs.go:282] 0 containers: []
	W1209 05:56:54.844251 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:56:54.844257 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:56:54.844314 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:56:54.867467 1437114 cri.go:89] found id: ""
	I1209 05:56:54.867540 1437114 logs.go:282] 0 containers: []
	W1209 05:56:54.867564 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:56:54.867585 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:56:54.867678 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:56:54.891985 1437114 cri.go:89] found id: ""
	I1209 05:56:54.892007 1437114 logs.go:282] 0 containers: []
	W1209 05:56:54.892053 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:56:54.892060 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:56:54.892135 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:56:54.915079 1437114 cri.go:89] found id: ""
	I1209 05:56:54.915104 1437114 logs.go:282] 0 containers: []
	W1209 05:56:54.915112 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:56:54.915119 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:56:54.915175 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:56:54.941729 1437114 cri.go:89] found id: ""
	I1209 05:56:54.941768 1437114 logs.go:282] 0 containers: []
	W1209 05:56:54.941776 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:56:54.941783 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:56:54.941840 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:56:54.970033 1437114 cri.go:89] found id: ""
	I1209 05:56:54.970058 1437114 logs.go:282] 0 containers: []
	W1209 05:56:54.970066 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:56:54.970072 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:56:54.970134 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:56:55.004188 1437114 cri.go:89] found id: ""
	I1209 05:56:55.004230 1437114 logs.go:282] 0 containers: []
	W1209 05:56:55.004240 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:56:55.004250 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:56:55.004264 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:56:55.034996 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:56:55.035025 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:56:55.091574 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:56:55.091610 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:56:55.108302 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:56:55.108331 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:56:55.172944 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:56:55.163616   13287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:55.164399   13287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:55.166155   13287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:55.166466   13287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:55.168546   13287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:56:55.163616   13287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:55.164399   13287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:55.166155   13287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:55.166466   13287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:55.168546   13287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:56:55.172964 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:56:55.172985 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:56:57.700005 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:56:57.714279 1437114 out.go:203] 
	W1209 05:56:57.717113 1437114 out.go:285] X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1209 05:56:57.717154 1437114 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1209 05:56:57.717169 1437114 out.go:285] * Related issues:
	W1209 05:56:57.717186 1437114 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	W1209 05:56:57.717204 1437114 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	I1209 05:56:57.720208 1437114 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 09 05:50:54 newest-cni-262540 containerd[557]: time="2025-12-09T05:50:54.140457949Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 09 05:50:54 newest-cni-262540 containerd[557]: time="2025-12-09T05:50:54.140531030Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 09 05:50:54 newest-cni-262540 containerd[557]: time="2025-12-09T05:50:54.140633493Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 09 05:50:54 newest-cni-262540 containerd[557]: time="2025-12-09T05:50:54.140719603Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 09 05:50:54 newest-cni-262540 containerd[557]: time="2025-12-09T05:50:54.140780722Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 09 05:50:54 newest-cni-262540 containerd[557]: time="2025-12-09T05:50:54.140839280Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 09 05:50:54 newest-cni-262540 containerd[557]: time="2025-12-09T05:50:54.140899447Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 09 05:50:54 newest-cni-262540 containerd[557]: time="2025-12-09T05:50:54.140960081Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 09 05:50:54 newest-cni-262540 containerd[557]: time="2025-12-09T05:50:54.141027665Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 09 05:50:54 newest-cni-262540 containerd[557]: time="2025-12-09T05:50:54.141111133Z" level=info msg="Connect containerd service"
	Dec 09 05:50:54 newest-cni-262540 containerd[557]: time="2025-12-09T05:50:54.141485580Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 09 05:50:54 newest-cni-262540 containerd[557]: time="2025-12-09T05:50:54.142145599Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 09 05:50:54 newest-cni-262540 containerd[557]: time="2025-12-09T05:50:54.154449407Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 09 05:50:54 newest-cni-262540 containerd[557]: time="2025-12-09T05:50:54.154518566Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 09 05:50:54 newest-cni-262540 containerd[557]: time="2025-12-09T05:50:54.154573474Z" level=info msg="Start subscribing containerd event"
	Dec 09 05:50:54 newest-cni-262540 containerd[557]: time="2025-12-09T05:50:54.154621735Z" level=info msg="Start recovering state"
	Dec 09 05:50:54 newest-cni-262540 containerd[557]: time="2025-12-09T05:50:54.192831791Z" level=info msg="Start event monitor"
	Dec 09 05:50:54 newest-cni-262540 containerd[557]: time="2025-12-09T05:50:54.193022399Z" level=info msg="Start cni network conf syncer for default"
	Dec 09 05:50:54 newest-cni-262540 containerd[557]: time="2025-12-09T05:50:54.193095554Z" level=info msg="Start streaming server"
	Dec 09 05:50:54 newest-cni-262540 containerd[557]: time="2025-12-09T05:50:54.193158043Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 09 05:50:54 newest-cni-262540 containerd[557]: time="2025-12-09T05:50:54.193246959Z" level=info msg="runtime interface starting up..."
	Dec 09 05:50:54 newest-cni-262540 containerd[557]: time="2025-12-09T05:50:54.193315946Z" level=info msg="starting plugins..."
	Dec 09 05:50:54 newest-cni-262540 containerd[557]: time="2025-12-09T05:50:54.193399907Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 09 05:50:54 newest-cni-262540 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 09 05:50:54 newest-cni-262540 containerd[557]: time="2025-12-09T05:50:54.195297741Z" level=info msg="containerd successfully booted in 0.080443s"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:57:00.994060   13439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:57:00.994444   13439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:57:00.995963   13439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:57:00.996555   13439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:57:00.998081   13439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +25.904254] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:14] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:16] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:18] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:19] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:30] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:32] overlayfs: idmapped layers are currently not supported
	[ +28.114653] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:33] overlayfs: idmapped layers are currently not supported
	[ +23.720849] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:34] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:35] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:36] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:37] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:38] overlayfs: idmapped layers are currently not supported
	[ +23.656275] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:39] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:57] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:58] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:00] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:02] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:03] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:05] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:06] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 9 05:31] overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	
	
	==> kernel <==
	 05:57:01 up  8:39,  0 user,  load average: 0.88, 0.70, 1.09
	Linux newest-cni-262540 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 09 05:56:57 newest-cni-262540 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 09 05:56:58 newest-cni-262540 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 484.
	Dec 09 05:56:58 newest-cni-262540 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 05:56:58 newest-cni-262540 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 05:56:58 newest-cni-262540 kubelet[13316]: E1209 05:56:58.251635   13316 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 09 05:56:58 newest-cni-262540 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 09 05:56:58 newest-cni-262540 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 09 05:56:58 newest-cni-262540 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 485.
	Dec 09 05:56:58 newest-cni-262540 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 05:56:58 newest-cni-262540 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 05:56:59 newest-cni-262540 kubelet[13321]: E1209 05:56:59.018705   13321 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 09 05:56:59 newest-cni-262540 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 09 05:56:59 newest-cni-262540 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 09 05:56:59 newest-cni-262540 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 486.
	Dec 09 05:56:59 newest-cni-262540 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 05:56:59 newest-cni-262540 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 05:56:59 newest-cni-262540 kubelet[13341]: E1209 05:56:59.760093   13341 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 09 05:56:59 newest-cni-262540 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 09 05:56:59 newest-cni-262540 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 09 05:57:00 newest-cni-262540 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 487.
	Dec 09 05:57:00 newest-cni-262540 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 05:57:00 newest-cni-262540 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 05:57:00 newest-cni-262540 kubelet[13346]: E1209 05:57:00.503671   13346 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 09 05:57:00 newest-cni-262540 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 09 05:57:00 newest-cni-262540 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
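The root cause is visible in the kubelet section of the log above: the node is running with cgroup v1, and the v1.35.0-beta.0 kubelet refuses to validate its configuration on a cgroup v1 host, so no control-plane containers (the apiserver included) are ever created. A quick way to confirm the node's cgroup mode, as a diagnostic sketch that assumes the docker driver and the newest-cni-262540 profile named in the log:

    # "cgroup2fs" means cgroup v2 (unified hierarchy); "tmpfs" means cgroup v1.
    minikube ssh -p newest-cni-262540 -- stat -fc %T /sys/fs/cgroup/
    # The node container inherits the cgroup mode of the docker host,
    # so the same check on the runner itself should agree:
    stat -fc %T /sys/fs/cgroup/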
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-262540 -n newest-cni-262540
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-262540 -n newest-cni-262540: exit status 2 (334.921088ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "newest-cni-262540" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (374.48s)
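The validation error that drove the kubelet restart counter to 487 above ("kubelet is configured to not run on a host using cgroup v1") tracks the upstream cgroup v1 deprecation: since Kubernetes v1.31 the kubelet can be configured to fail fast on cgroup v1 hosts (the failCgroupV1 KubeletConfiguration field), and this beta evidently does so on this runner. The durable fix is to boot the runner with the unified cgroup hierarchy; a sketch, assuming an Ubuntu 20.04 host (matching the 5.15.0-1084-aws kernel string above) booted via GRUB:

    # Enable cgroup v2 on the host; takes effect after a reboot.
    sudo sed -i 's/^GRUB_CMDLINE_LINUX="/GRUB_CMDLINE_LINUX="systemd.unified_cgroup_hierarchy=1 /' /etc/default/grub
    sudo update-grub
    sudo reboot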

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (542.23s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1209 05:52:38.985261 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-717497/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1209 05:53:26.932768 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/old-k8s-version-384009/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
[previous line repeated 41 more times]
E1209 05:54:09.296721 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/default-k8s-diff-port-564611/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
[previous line repeated 40 more times]
E1209 05:54:49.996252 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/old-k8s-version-384009/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
[previous line repeated 42 more times]
E1209 05:55:32.751560 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
(last message repeated 16 times)
E1209 05:55:49.823013 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/addons-221952/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
(last message repeated 16 times)
E1209 05:56:06.732130 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/addons-221952/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
(last message repeated 48 times)
E1209 05:56:55.819144 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
(last message repeated 42 times)
E1209 05:57:38.985424 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-717497/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
(last message repeated 47 times)
E1209 05:58:26.932779 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/old-k8s-version-384009/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
I1209 05:58:35.489696 1144231 config.go:182] Loaded profile config "auto-132757": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1209 05:59:09.296688 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/default-k8s-diff-port-564611/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
I1209 06:00:01.003387 1144231 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
start_stop_delete_test.go:272: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:272: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-842269 -n no-preload-842269
start_stop_delete_test.go:272: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-842269 -n no-preload-842269: exit status 2 (383.661275ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:272: status error: exit status 2 (may be ok)
start_stop_delete_test.go:272: "no-preload-842269" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
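The WARNING flood above is the test helper polling the kubernetes-dashboard namespace by label selector until its 9m0s context deadline expires; every list call fails because the apiserver on 192.168.85.2:8443 refuses connections. A minimal sketch of that loop, assuming client-go, a KUBECONFIG env var pointing at the profile's kubeconfig, and a 3s poll interval (interval and env var are assumptions, not minikube's actual helper code):

package main

import (
	"context"
	"fmt"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// KUBECONFIG location is an assumption; the test uses the profile's kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Same 9m0s budget the helper reports exceeding above.
	ctx, cancel := context.WithTimeout(context.Background(), 9*time.Minute)
	defer cancel()

	for {
		pods, err := client.CoreV1().Pods("kubernetes-dashboard").List(ctx,
			metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"})
		switch {
		case err != nil:
			// With the apiserver down, each attempt surfaces as the
			// "connection refused" WARNING seen in the log.
			fmt.Println("WARNING: pod list returned:", err)
		case len(pods.Items) > 0:
			fmt.Println("pod found:", pods.Items[0].Name)
			return
		}
		select {
		case <-ctx.Done():
			// This is the "failed to start within 9m0s" outcome above.
			fmt.Println("failed to start within 9m0s:", ctx.Err())
			return
		case <-time.After(3 * time.Second): // poll interval is an assumption
		}
	}
}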
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-842269
helpers_test.go:243: (dbg) docker inspect no-preload-842269:

-- stdout --
	[
	    {
	        "Id": "9789b34a5453b154c4ceca4f0038c1d7948d7f0f72f334a114a8452d803ad415",
	        "Created": "2025-12-09T05:35:10.617601088Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1429985,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-09T05:45:19.572205739Z",
	            "FinishedAt": "2025-12-09T05:45:18.233836564Z"
	        },
	        "Image": "sha256:e4eb91ed18a24161fce60c7cdd660144ecd5b8c5029dc2dea2c5e423c2f48ce4",
	        "ResolvConfPath": "/var/lib/docker/containers/9789b34a5453b154c4ceca4f0038c1d7948d7f0f72f334a114a8452d803ad415/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9789b34a5453b154c4ceca4f0038c1d7948d7f0f72f334a114a8452d803ad415/hostname",
	        "HostsPath": "/var/lib/docker/containers/9789b34a5453b154c4ceca4f0038c1d7948d7f0f72f334a114a8452d803ad415/hosts",
	        "LogPath": "/var/lib/docker/containers/9789b34a5453b154c4ceca4f0038c1d7948d7f0f72f334a114a8452d803ad415/9789b34a5453b154c4ceca4f0038c1d7948d7f0f72f334a114a8452d803ad415-json.log",
	        "Name": "/no-preload-842269",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-842269:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-842269",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9789b34a5453b154c4ceca4f0038c1d7948d7f0f72f334a114a8452d803ad415",
	                "LowerDir": "/var/lib/docker/overlay2/658f95b1f330fcb0d4774e9611c50218d409d0a889583e0050325b4fe479e9f7-init/diff:/var/lib/docker/overlay2/c44bb57aa59cc265266f37f2bb6e7ec0e7d641c3b4aeaa57e6d23deec6f0d1d4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/658f95b1f330fcb0d4774e9611c50218d409d0a889583e0050325b4fe479e9f7/merged",
	                "UpperDir": "/var/lib/docker/overlay2/658f95b1f330fcb0d4774e9611c50218d409d0a889583e0050325b4fe479e9f7/diff",
	                "WorkDir": "/var/lib/docker/overlay2/658f95b1f330fcb0d4774e9611c50218d409d0a889583e0050325b4fe479e9f7/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-842269",
	                "Source": "/var/lib/docker/volumes/no-preload-842269/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-842269",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-842269",
	                "name.minikube.sigs.k8s.io": "no-preload-842269",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7fcd619b0c6697c145e92186b02d3f8b52fc0617bc693eecdb3992bd01dd5379",
	            "SandboxKey": "/var/run/docker/netns/7fcd619b0c66",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34210"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34211"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34214"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34212"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34213"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-842269": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "6e:db:fc:0d:87:5a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6461bd7226e5723487f325bf78054dc63f1dafa2831abe7b44a8cc288dfa4456",
	                    "EndpointID": "26ea729d3df39a6ce095a6c0877cc7989e68004132accb6fb25a8d1686357af6",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-842269",
	                        "9789b34a5453"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
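The NetworkSettings.Ports block in the inspect dump above is the same data minikube queries to reach the node over SSH (compare the later docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" invocations in the Last Start log). Below is a minimal Go sketch of that lookup, assuming docker is on PATH and using the container name from this dump; it is an illustration, not minikube's actual code.

package main

import (
	"encoding/json"
	"fmt"
	"os"
	"os/exec"
)

// portBinding mirrors one entry of NetworkSettings.Ports["22/tcp"]
// in the inspect dump above.
type portBinding struct {
	HostIp   string
	HostPort string
}

type inspectEntry struct {
	NetworkSettings struct {
		Ports map[string][]portBinding
	}
}

func main() {
	// Container name taken from the inspect dump above.
	out, err := exec.Command("docker", "container", "inspect", "no-preload-842269").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "docker inspect failed:", err)
		os.Exit(1)
	}
	// docker inspect emits a JSON array, one object per container.
	var entries []inspectEntry
	if err := json.Unmarshal(out, &entries); err != nil || len(entries) == 0 {
		fmt.Fprintln(os.Stderr, "unexpected inspect output:", err)
		os.Exit(1)
	}
	// 22/tcp is the node's SSH port; against the dump above this
	// prints 127.0.0.1:34210.
	for _, b := range entries[0].NetworkSettings.Ports["22/tcp"] {
		fmt.Printf("ssh endpoint: %s:%s\n", b.HostIp, b.HostPort)
	}
}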
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-842269 -n no-preload-842269
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-842269 -n no-preload-842269: exit status 2 (371.274418ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-842269 logs -n 25
helpers_test.go:260: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────┬────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                      ARGS                                       │    PROFILE     │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────┼────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p kindnet-132757 sudo iptables -t nat -L -n -v                                 │ kindnet-132757 │ jenkins │ v1.37.0 │ 09 Dec 25 06:00 UTC │ 09 Dec 25 06:00 UTC │
	│ ssh     │ -p kindnet-132757 sudo systemctl status kubelet --all --full --no-pager         │ kindnet-132757 │ jenkins │ v1.37.0 │ 09 Dec 25 06:00 UTC │ 09 Dec 25 06:00 UTC │
	│ ssh     │ -p kindnet-132757 sudo systemctl cat kubelet --no-pager                         │ kindnet-132757 │ jenkins │ v1.37.0 │ 09 Dec 25 06:00 UTC │ 09 Dec 25 06:00 UTC │
	│ ssh     │ -p kindnet-132757 sudo journalctl -xeu kubelet --all --full --no-pager          │ kindnet-132757 │ jenkins │ v1.37.0 │ 09 Dec 25 06:00 UTC │ 09 Dec 25 06:00 UTC │
	│ ssh     │ -p kindnet-132757 sudo cat /etc/kubernetes/kubelet.conf                         │ kindnet-132757 │ jenkins │ v1.37.0 │ 09 Dec 25 06:00 UTC │ 09 Dec 25 06:00 UTC │
	│ ssh     │ -p kindnet-132757 sudo cat /var/lib/kubelet/config.yaml                         │ kindnet-132757 │ jenkins │ v1.37.0 │ 09 Dec 25 06:00 UTC │ 09 Dec 25 06:00 UTC │
	│ ssh     │ -p kindnet-132757 sudo systemctl status docker --all --full --no-pager          │ kindnet-132757 │ jenkins │ v1.37.0 │ 09 Dec 25 06:00 UTC │                     │
	│ ssh     │ -p kindnet-132757 sudo systemctl cat docker --no-pager                          │ kindnet-132757 │ jenkins │ v1.37.0 │ 09 Dec 25 06:00 UTC │ 09 Dec 25 06:00 UTC │
	│ ssh     │ -p kindnet-132757 sudo cat /etc/docker/daemon.json                              │ kindnet-132757 │ jenkins │ v1.37.0 │ 09 Dec 25 06:00 UTC │                     │
	│ ssh     │ -p kindnet-132757 sudo docker system info                                       │ kindnet-132757 │ jenkins │ v1.37.0 │ 09 Dec 25 06:00 UTC │                     │
	│ ssh     │ -p kindnet-132757 sudo systemctl status cri-docker --all --full --no-pager      │ kindnet-132757 │ jenkins │ v1.37.0 │ 09 Dec 25 06:00 UTC │                     │
	│ ssh     │ -p kindnet-132757 sudo systemctl cat cri-docker --no-pager                      │ kindnet-132757 │ jenkins │ v1.37.0 │ 09 Dec 25 06:00 UTC │ 09 Dec 25 06:00 UTC │
	│ ssh     │ -p kindnet-132757 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf │ kindnet-132757 │ jenkins │ v1.37.0 │ 09 Dec 25 06:00 UTC │                     │
	│ ssh     │ -p kindnet-132757 sudo cat /usr/lib/systemd/system/cri-docker.service           │ kindnet-132757 │ jenkins │ v1.37.0 │ 09 Dec 25 06:00 UTC │ 09 Dec 25 06:00 UTC │
	│ ssh     │ -p kindnet-132757 sudo cri-dockerd --version                                    │ kindnet-132757 │ jenkins │ v1.37.0 │ 09 Dec 25 06:00 UTC │ 09 Dec 25 06:00 UTC │
	│ ssh     │ -p kindnet-132757 sudo systemctl status containerd --all --full --no-pager      │ kindnet-132757 │ jenkins │ v1.37.0 │ 09 Dec 25 06:00 UTC │ 09 Dec 25 06:00 UTC │
	│ ssh     │ -p kindnet-132757 sudo systemctl cat containerd --no-pager                      │ kindnet-132757 │ jenkins │ v1.37.0 │ 09 Dec 25 06:00 UTC │ 09 Dec 25 06:00 UTC │
	│ ssh     │ -p kindnet-132757 sudo cat /lib/systemd/system/containerd.service               │ kindnet-132757 │ jenkins │ v1.37.0 │ 09 Dec 25 06:00 UTC │ 09 Dec 25 06:00 UTC │
	│ ssh     │ -p kindnet-132757 sudo cat /etc/containerd/config.toml                          │ kindnet-132757 │ jenkins │ v1.37.0 │ 09 Dec 25 06:00 UTC │ 09 Dec 25 06:00 UTC │
	│ ssh     │ -p kindnet-132757 sudo containerd config dump                                   │ kindnet-132757 │ jenkins │ v1.37.0 │ 09 Dec 25 06:00 UTC │ 09 Dec 25 06:00 UTC │
	│ ssh     │ -p kindnet-132757 sudo systemctl status crio --all --full --no-pager            │ kindnet-132757 │ jenkins │ v1.37.0 │ 09 Dec 25 06:00 UTC │                     │
	│ ssh     │ -p kindnet-132757 sudo systemctl cat crio --no-pager                            │ kindnet-132757 │ jenkins │ v1.37.0 │ 09 Dec 25 06:00 UTC │ 09 Dec 25 06:00 UTC │
	│ ssh     │ -p kindnet-132757 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;  │ kindnet-132757 │ jenkins │ v1.37.0 │ 09 Dec 25 06:00 UTC │ 09 Dec 25 06:00 UTC │
	│ ssh     │ -p kindnet-132757 sudo crio config                                              │ kindnet-132757 │ jenkins │ v1.37.0 │ 09 Dec 25 06:00 UTC │ 09 Dec 25 06:00 UTC │
	│ delete  │ -p kindnet-132757                                                               │ kindnet-132757 │ jenkins │ v1.37.0 │ 09 Dec 25 06:00 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────┴────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/09 05:59:06
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 05:59:06.030485 1462459 out.go:360] Setting OutFile to fd 1 ...
	I1209 05:59:06.030602 1462459 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 05:59:06.030612 1462459 out.go:374] Setting ErrFile to fd 2...
	I1209 05:59:06.030618 1462459 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 05:59:06.030876 1462459 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-1142328/.minikube/bin
	I1209 05:59:06.031342 1462459 out.go:368] Setting JSON to false
	I1209 05:59:06.032289 1462459 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":31269,"bootTime":1765228677,"procs":165,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1209 05:59:06.032364 1462459 start.go:143] virtualization:  
	I1209 05:59:06.036055 1462459 out.go:179] * [kindnet-132757] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1209 05:59:06.040845 1462459 out.go:179]   - MINIKUBE_LOCATION=22081
	I1209 05:59:06.041036 1462459 notify.go:221] Checking for updates...
	I1209 05:59:06.047940 1462459 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 05:59:06.051420 1462459 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22081-1142328/kubeconfig
	I1209 05:59:06.054650 1462459 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-1142328/.minikube
	I1209 05:59:06.057970 1462459 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1209 05:59:06.061144 1462459 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 05:59:06.064807 1462459 config.go:182] Loaded profile config "no-preload-842269": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1209 05:59:06.064914 1462459 driver.go:422] Setting default libvirt URI to qemu:///system
	I1209 05:59:06.100675 1462459 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1209 05:59:06.100791 1462459 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 05:59:06.154683 1462459 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-09 05:59:06.145434481 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1209 05:59:06.154789 1462459 docker.go:319] overlay module found
	I1209 05:59:06.158068 1462459 out.go:179] * Using the docker driver based on user configuration
	I1209 05:59:06.161008 1462459 start.go:309] selected driver: docker
	I1209 05:59:06.161030 1462459 start.go:927] validating driver "docker" against <nil>
	I1209 05:59:06.161044 1462459 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 05:59:06.161819 1462459 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 05:59:06.215475 1462459 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-09 05:59:06.206619267 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1209 05:59:06.215639 1462459 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1209 05:59:06.215860 1462459 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 05:59:06.218930 1462459 out.go:179] * Using Docker driver with root privileges
	I1209 05:59:06.221872 1462459 cni.go:84] Creating CNI manager for "kindnet"
	I1209 05:59:06.221894 1462459 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1209 05:59:06.221972 1462459 start.go:353] cluster config:
	{Name:kindnet-132757 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kindnet-132757 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 05:59:06.225151 1462459 out.go:179] * Starting "kindnet-132757" primary control-plane node in "kindnet-132757" cluster
	I1209 05:59:06.227885 1462459 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1209 05:59:06.230778 1462459 out.go:179] * Pulling base image v0.0.48-1765184860-22066 ...
	I1209 05:59:06.233590 1462459 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1209 05:59:06.233634 1462459 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4
	I1209 05:59:06.233646 1462459 cache.go:65] Caching tarball of preloaded images
	I1209 05:59:06.233663 1462459 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local docker daemon
	I1209 05:59:06.233734 1462459 preload.go:238] Found /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1209 05:59:06.233744 1462459 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on containerd
	I1209 05:59:06.233849 1462459 profile.go:143] Saving config to /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/kindnet-132757/config.json ...
	I1209 05:59:06.233868 1462459 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/kindnet-132757/config.json: {Name:mk9bf14297404d10737724e074d5cfe578df47b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 05:59:06.252341 1462459 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local docker daemon, skipping pull
	I1209 05:59:06.252371 1462459 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c exists in daemon, skipping load
	I1209 05:59:06.252390 1462459 cache.go:243] Successfully downloaded all kic artifacts
	I1209 05:59:06.252423 1462459 start.go:360] acquireMachinesLock for kindnet-132757: {Name:mkced5e39d9058eae91cc83750fd1d978d1f1c61 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 05:59:06.252535 1462459 start.go:364] duration metric: took 90.385µs to acquireMachinesLock for "kindnet-132757"
	I1209 05:59:06.252571 1462459 start.go:93] Provisioning new machine with config: &{Name:kindnet-132757 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kindnet-132757 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1209 05:59:06.252653 1462459 start.go:125] createHost starting for "" (driver="docker")
	I1209 05:59:06.256078 1462459 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1209 05:59:06.256321 1462459 start.go:159] libmachine.API.Create for "kindnet-132757" (driver="docker")
	I1209 05:59:06.256358 1462459 client.go:173] LocalClient.Create starting
	I1209 05:59:06.256430 1462459 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem
	I1209 05:59:06.256469 1462459 main.go:143] libmachine: Decoding PEM data...
	I1209 05:59:06.256489 1462459 main.go:143] libmachine: Parsing certificate...
	I1209 05:59:06.256553 1462459 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/cert.pem
	I1209 05:59:06.256575 1462459 main.go:143] libmachine: Decoding PEM data...
	I1209 05:59:06.256591 1462459 main.go:143] libmachine: Parsing certificate...
	I1209 05:59:06.256947 1462459 cli_runner.go:164] Run: docker network inspect kindnet-132757 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1209 05:59:06.273126 1462459 cli_runner.go:211] docker network inspect kindnet-132757 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1209 05:59:06.273226 1462459 network_create.go:284] running [docker network inspect kindnet-132757] to gather additional debugging logs...
	I1209 05:59:06.273249 1462459 cli_runner.go:164] Run: docker network inspect kindnet-132757
	W1209 05:59:06.289001 1462459 cli_runner.go:211] docker network inspect kindnet-132757 returned with exit code 1
	I1209 05:59:06.289064 1462459 network_create.go:287] error running [docker network inspect kindnet-132757]: docker network inspect kindnet-132757: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network kindnet-132757 not found
	I1209 05:59:06.289080 1462459 network_create.go:289] output of [docker network inspect kindnet-132757]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network kindnet-132757 not found
	
	** /stderr **
	I1209 05:59:06.289185 1462459 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1209 05:59:06.306766 1462459 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-7a15eec16b1a IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:8a:b7:58:bc:12:6c} reservation:<nil>}
	I1209 05:59:06.307128 1462459 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-fcb9e6b38e8e IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:56:c3:7a:b4:06:4b} reservation:<nil>}
	I1209 05:59:06.307411 1462459 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-8c1346c67d6b IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:82:10:14:75:55:fb} reservation:<nil>}
	I1209 05:59:06.307844 1462459 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a361c0}
	I1209 05:59:06.307869 1462459 network_create.go:124] attempt to create docker network kindnet-132757 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1209 05:59:06.307924 1462459 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kindnet-132757 kindnet-132757
	I1209 05:59:06.363306 1462459 network_create.go:108] docker network kindnet-132757 192.168.76.0/24 created
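The three "skipping subnet ... that is taken" lines above show the subnet walk: starting at 192.168.49.0/24, the third octet advances in steps of 9 (49, 58, 67, 76, ...) until a /24 not owned by an existing bridge is found. A minimal Go sketch of that selection follows, with the taken set hard-coded from this log rather than discovered from the host's interfaces, and only illustrating the walk, not minikube's full reservation logic.

package main

import "fmt"

func main() {
	// Subnets already claimed by existing bridges, hard-coded from
	// the "skipping subnet ... that is taken" lines above.
	taken := map[string]bool{
		"192.168.49.0/24": true,
		"192.168.58.0/24": true,
		"192.168.67.0/24": true,
	}
	// Walk candidate /24s the way the log shows: third octet starts
	// at 49 and advances in steps of 9 (49, 58, 67, 76, ...).
	for octet := 49; octet < 256; octet += 9 {
		subnet := fmt.Sprintf("192.168.%d.0/24", octet)
		if !taken[subnet] {
			fmt.Println("using free private subnet", subnet) // prints 192.168.76.0/24
			return
		}
	}
}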
	I1209 05:59:06.363342 1462459 kic.go:121] calculated static IP "192.168.76.2" for the "kindnet-132757" container
	I1209 05:59:06.363416 1462459 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1209 05:59:06.379621 1462459 cli_runner.go:164] Run: docker volume create kindnet-132757 --label name.minikube.sigs.k8s.io=kindnet-132757 --label created_by.minikube.sigs.k8s.io=true
	I1209 05:59:06.396332 1462459 oci.go:103] Successfully created a docker volume kindnet-132757
	I1209 05:59:06.396434 1462459 cli_runner.go:164] Run: docker run --rm --name kindnet-132757-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-132757 --entrypoint /usr/bin/test -v kindnet-132757:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c -d /var/lib
	I1209 05:59:06.909200 1462459 oci.go:107] Successfully prepared a docker volume kindnet-132757
	I1209 05:59:06.909280 1462459 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1209 05:59:06.909295 1462459 kic.go:194] Starting extracting preloaded images to volume ...
	I1209 05:59:06.909381 1462459 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v kindnet-132757:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c -I lz4 -xf /preloaded.tar -C /extractDir
	I1209 05:59:10.920812 1462459 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v kindnet-132757:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c -I lz4 -xf /preloaded.tar -C /extractDir: (4.011391381s)
	I1209 05:59:10.920843 1462459 kic.go:203] duration metric: took 4.011544328s to extract preloaded images to volume ...
	W1209 05:59:10.921007 1462459 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1209 05:59:10.921125 1462459 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1209 05:59:10.981889 1462459 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kindnet-132757 --name kindnet-132757 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-132757 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kindnet-132757 --network kindnet-132757 --ip 192.168.76.2 --volume kindnet-132757:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c
	I1209 05:59:11.311771 1462459 cli_runner.go:164] Run: docker container inspect kindnet-132757 --format={{.State.Running}}
	I1209 05:59:11.335850 1462459 cli_runner.go:164] Run: docker container inspect kindnet-132757 --format={{.State.Status}}
	I1209 05:59:11.354368 1462459 cli_runner.go:164] Run: docker exec kindnet-132757 stat /var/lib/dpkg/alternatives/iptables
	I1209 05:59:11.402550 1462459 oci.go:144] the created container "kindnet-132757" has a running status.
	I1209 05:59:11.402580 1462459 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22081-1142328/.minikube/machines/kindnet-132757/id_rsa...
	I1209 05:59:11.764947 1462459 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22081-1142328/.minikube/machines/kindnet-132757/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1209 05:59:11.805043 1462459 cli_runner.go:164] Run: docker container inspect kindnet-132757 --format={{.State.Status}}
	I1209 05:59:11.830007 1462459 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1209 05:59:11.830025 1462459 kic_runner.go:114] Args: [docker exec --privileged kindnet-132757 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1209 05:59:11.870855 1462459 cli_runner.go:164] Run: docker container inspect kindnet-132757 --format={{.State.Status}}
	I1209 05:59:11.889475 1462459 machine.go:94] provisionDockerMachine start ...
	I1209 05:59:11.889576 1462459 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-132757
	I1209 05:59:11.906321 1462459 main.go:143] libmachine: Using SSH client type: native
	I1209 05:59:11.906690 1462459 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 34225 <nil> <nil>}
	I1209 05:59:11.906700 1462459 main.go:143] libmachine: About to run SSH command:
	hostname
	I1209 05:59:11.907333 1462459 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:56970->127.0.0.1:34225: read: connection reset by peer
	I1209 05:59:15.067537 1462459 main.go:143] libmachine: SSH cmd err, output: <nil>: kindnet-132757
	
	I1209 05:59:15.067559 1462459 ubuntu.go:182] provisioning hostname "kindnet-132757"
	I1209 05:59:15.067628 1462459 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-132757
	I1209 05:59:15.084688 1462459 main.go:143] libmachine: Using SSH client type: native
	I1209 05:59:15.085011 1462459 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 34225 <nil> <nil>}
	I1209 05:59:15.085026 1462459 main.go:143] libmachine: About to run SSH command:
	sudo hostname kindnet-132757 && echo "kindnet-132757" | sudo tee /etc/hostname
	I1209 05:59:15.244629 1462459 main.go:143] libmachine: SSH cmd err, output: <nil>: kindnet-132757
	
	I1209 05:59:15.244765 1462459 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-132757
	I1209 05:59:15.262098 1462459 main.go:143] libmachine: Using SSH client type: native
	I1209 05:59:15.262413 1462459 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 34225 <nil> <nil>}
	I1209 05:59:15.262442 1462459 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-132757' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-132757/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-132757' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 05:59:15.412027 1462459 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1209 05:59:15.412051 1462459 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22081-1142328/.minikube CaCertPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22081-1142328/.minikube}
	I1209 05:59:15.412084 1462459 ubuntu.go:190] setting up certificates
	I1209 05:59:15.412101 1462459 provision.go:84] configureAuth start
	I1209 05:59:15.412165 1462459 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-132757
	I1209 05:59:15.428544 1462459 provision.go:143] copyHostCerts
	I1209 05:59:15.428619 1462459 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.pem, removing ...
	I1209 05:59:15.428634 1462459 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.pem
	I1209 05:59:15.428723 1462459 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.pem (1078 bytes)
	I1209 05:59:15.428816 1462459 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-1142328/.minikube/cert.pem, removing ...
	I1209 05:59:15.428827 1462459 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-1142328/.minikube/cert.pem
	I1209 05:59:15.428853 1462459 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22081-1142328/.minikube/cert.pem (1123 bytes)
	I1209 05:59:15.428911 1462459 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-1142328/.minikube/key.pem, removing ...
	I1209 05:59:15.428920 1462459 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-1142328/.minikube/key.pem
	I1209 05:59:15.428946 1462459 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22081-1142328/.minikube/key.pem (1675 bytes)
	I1209 05:59:15.429003 1462459 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca-key.pem org=jenkins.kindnet-132757 san=[127.0.0.1 192.168.76.2 kindnet-132757 localhost minikube]
	I1209 05:59:15.834872 1462459 provision.go:177] copyRemoteCerts
	I1209 05:59:15.834961 1462459 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 05:59:15.835008 1462459 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-132757
	I1209 05:59:15.855810 1462459 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34225 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/kindnet-132757/id_rsa Username:docker}
	I1209 05:59:15.959749 1462459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1209 05:59:15.978281 1462459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I1209 05:59:15.995104 1462459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1209 05:59:16.014968 1462459 provision.go:87] duration metric: took 602.847091ms to configureAuth
	I1209 05:59:16.014997 1462459 ubuntu.go:206] setting minikube options for container-runtime
	I1209 05:59:16.015189 1462459 config.go:182] Loaded profile config "kindnet-132757": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1209 05:59:16.015201 1462459 machine.go:97] duration metric: took 4.125707277s to provisionDockerMachine
	I1209 05:59:16.015209 1462459 client.go:176] duration metric: took 9.758839879s to LocalClient.Create
	I1209 05:59:16.015228 1462459 start.go:167] duration metric: took 9.75890871s to libmachine.API.Create "kindnet-132757"
	I1209 05:59:16.015239 1462459 start.go:293] postStartSetup for "kindnet-132757" (driver="docker")
	I1209 05:59:16.015248 1462459 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 05:59:16.015304 1462459 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 05:59:16.015347 1462459 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-132757
	I1209 05:59:16.033066 1462459 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34225 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/kindnet-132757/id_rsa Username:docker}
	I1209 05:59:16.140891 1462459 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 05:59:16.144208 1462459 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1209 05:59:16.144239 1462459 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1209 05:59:16.144251 1462459 filesync.go:126] Scanning /home/jenkins/minikube-integration/22081-1142328/.minikube/addons for local assets ...
	I1209 05:59:16.144312 1462459 filesync.go:126] Scanning /home/jenkins/minikube-integration/22081-1142328/.minikube/files for local assets ...
	I1209 05:59:16.144400 1462459 filesync.go:149] local asset: /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/ssl/certs/11442312.pem -> 11442312.pem in /etc/ssl/certs
	I1209 05:59:16.144503 1462459 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 05:59:16.151864 1462459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/ssl/certs/11442312.pem --> /etc/ssl/certs/11442312.pem (1708 bytes)
	I1209 05:59:16.168718 1462459 start.go:296] duration metric: took 153.46488ms for postStartSetup
	I1209 05:59:16.169092 1462459 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-132757
	I1209 05:59:16.185491 1462459 profile.go:143] Saving config to /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/kindnet-132757/config.json ...
	I1209 05:59:16.185782 1462459 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1209 05:59:16.185825 1462459 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-132757
	I1209 05:59:16.202510 1462459 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34225 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/kindnet-132757/id_rsa Username:docker}
	I1209 05:59:16.305125 1462459 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1209 05:59:16.309966 1462459 start.go:128] duration metric: took 10.057297938s to createHost
	I1209 05:59:16.310009 1462459 start.go:83] releasing machines lock for "kindnet-132757", held for 10.057440785s
	I1209 05:59:16.310083 1462459 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-132757
	I1209 05:59:16.326867 1462459 ssh_runner.go:195] Run: cat /version.json
	I1209 05:59:16.326924 1462459 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-132757
	I1209 05:59:16.326964 1462459 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 05:59:16.327028 1462459 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-132757
	I1209 05:59:16.349157 1462459 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34225 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/kindnet-132757/id_rsa Username:docker}
	I1209 05:59:16.353585 1462459 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34225 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/kindnet-132757/id_rsa Username:docker}
	I1209 05:59:16.547395 1462459 ssh_runner.go:195] Run: systemctl --version
	I1209 05:59:16.554038 1462459 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 05:59:16.558153 1462459 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 05:59:16.558248 1462459 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 05:59:16.585023 1462459 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1209 05:59:16.585054 1462459 start.go:496] detecting cgroup driver to use...
	I1209 05:59:16.585087 1462459 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1209 05:59:16.585141 1462459 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1209 05:59:16.599958 1462459 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1209 05:59:16.613197 1462459 docker.go:218] disabling cri-docker service (if available) ...
	I1209 05:59:16.613261 1462459 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 05:59:16.630596 1462459 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 05:59:16.648489 1462459 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 05:59:16.765893 1462459 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 05:59:16.887746 1462459 docker.go:234] disabling docker service ...
	I1209 05:59:16.887850 1462459 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 05:59:16.909139 1462459 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 05:59:16.922097 1462459 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 05:59:17.050925 1462459 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 05:59:17.170561 1462459 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 05:59:17.186012 1462459 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 05:59:17.200896 1462459 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1209 05:59:17.209928 1462459 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1209 05:59:17.218816 1462459 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1209 05:59:17.218926 1462459 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1209 05:59:17.228004 1462459 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1209 05:59:17.237136 1462459 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1209 05:59:17.246203 1462459 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1209 05:59:17.254832 1462459 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 05:59:17.262812 1462459 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1209 05:59:17.271584 1462459 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1209 05:59:17.279928 1462459 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1209 05:59:17.288777 1462459 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 05:59:17.296177 1462459 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 05:59:17.303229 1462459 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 05:59:17.417300 1462459 ssh_runner.go:195] Run: sudo systemctl restart containerd
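The run of sed one-liners above rewrites /etc/containerd/config.toml in place (sandbox image, SystemdCgroup=false for the cgroupfs driver, runc v2 runtime type, CNI conf_dir) before the daemon-reload and restart. As an illustration only, here is a small Go equivalent of just the SystemdCgroup edit, assuming the default config path; minikube performs this over SSH with sed, as logged, not with Go code like this.

package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	// Default path as in the log; pass another path as argv[1]
	// to experiment on a copy without touching the real config.
	path := "/etc/containerd/config.toml"
	if len(os.Args) > 1 {
		path = os.Args[1]
	}
	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Mirror the sed pattern above: force every `SystemdCgroup = ...`
	// assignment to false, preserving the line's indentation.
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
	if err := os.WriteFile(path, out, 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}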
	I1209 05:59:17.554853 1462459 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1209 05:59:17.554974 1462459 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1209 05:59:17.558773 1462459 start.go:564] Will wait 60s for crictl version
	I1209 05:59:17.558874 1462459 ssh_runner.go:195] Run: which crictl
	I1209 05:59:17.562127 1462459 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1209 05:59:17.589006 1462459 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1209 05:59:17.589102 1462459 ssh_runner.go:195] Run: containerd --version
	I1209 05:59:17.610058 1462459 ssh_runner.go:195] Run: containerd --version
	I1209 05:59:17.634190 1462459 out.go:179] * Preparing Kubernetes v1.34.2 on containerd 2.2.0 ...
	I1209 05:59:17.637057 1462459 cli_runner.go:164] Run: docker network inspect kindnet-132757 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1209 05:59:17.652577 1462459 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1209 05:59:17.656240 1462459 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
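The bash -c one-liner above is the idempotent hosts-entry update minikube uses: filter out any existing line for host.minikube.internal, append the fresh IP mapping, and copy the temp file back over /etc/hosts. A standalone Go sketch of the same upsert pattern, using an illustrative path so it can be run without touching a real hosts file:

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHost mirrors the "{ grep -v ...; echo ip<TAB>name; } > tmp; cp"
// idiom above: drop blank lines and any line already ending in the
// name, then append the new tab-separated mapping.
func upsertHost(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil && !os.IsNotExist(err) {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if line != "" && !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	// IP and name taken from the log line above; the path is
	// illustrative, not the real /etc/hosts.
	if err := upsertHost("/tmp/hosts.example", "192.168.76.1", "host.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}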
	I1209 05:59:17.665447 1462459 kubeadm.go:884] updating cluster {Name:kindnet-132757 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kindnet-132757 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 05:59:17.665583 1462459 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1209 05:59:17.665652 1462459 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 05:59:17.689282 1462459 containerd.go:627] all images are preloaded for containerd runtime.
	I1209 05:59:17.689307 1462459 containerd.go:534] Images already preloaded, skipping extraction
	I1209 05:59:17.689368 1462459 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 05:59:17.713085 1462459 containerd.go:627] all images are preloaded for containerd runtime.
	I1209 05:59:17.713108 1462459 cache_images.go:86] Images are preloaded, skipping loading
	I1209 05:59:17.713115 1462459 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.2 containerd true true} ...
	I1209 05:59:17.713211 1462459 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kindnet-132757 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:kindnet-132757 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet}
	I1209 05:59:17.713310 1462459 ssh_runner.go:195] Run: sudo crictl info
	I1209 05:59:17.742379 1462459 cni.go:84] Creating CNI manager for "kindnet"
	I1209 05:59:17.742460 1462459 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1209 05:59:17.742498 1462459 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kindnet-132757 NodeName:kindnet-132757 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1209 05:59:17.742664 1462459 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "kindnet-132757"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
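The multi-document YAML above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) is what gets copied to /var/tmp/minikube/kubeadm.yaml in the lines that follow. As a sketch, assuming kubeadm v1.26+ inside the node (which ships the validate subcommand; this run uses v1.34.2 binaries), the file can be sanity-checked before init:

    # hypothetical pre-flight, run via 'minikube ssh' inside the node
    sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml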
	I1209 05:59:17.742750 1462459 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1209 05:59:17.750075 1462459 binaries.go:51] Found k8s binaries, skipping transfer
	I1209 05:59:17.750141 1462459 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1209 05:59:17.757285 1462459 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I1209 05:59:17.769044 1462459 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1209 05:59:17.780907 1462459 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2227 bytes)
	I1209 05:59:17.793035 1462459 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1209 05:59:17.796771 1462459 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 05:59:17.806451 1462459 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 05:59:17.916523 1462459 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 05:59:17.931801 1462459 certs.go:69] Setting up /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/kindnet-132757 for IP: 192.168.76.2
	I1209 05:59:17.931823 1462459 certs.go:195] generating shared ca certs ...
	I1209 05:59:17.931839 1462459 certs.go:227] acquiring lock for ca certs: {Name:mk15788702f8c4e23b5aeab3f44961d296fab259 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 05:59:17.931970 1462459 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.key
	I1209 05:59:17.932080 1462459 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/proxy-client-ca.key
	I1209 05:59:17.932096 1462459 certs.go:257] generating profile certs ...
	I1209 05:59:17.932152 1462459 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/kindnet-132757/client.key
	I1209 05:59:17.932167 1462459 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/kindnet-132757/client.crt with IP's: []
	I1209 05:59:18.149157 1462459 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/kindnet-132757/client.crt ...
	I1209 05:59:18.149191 1462459 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/kindnet-132757/client.crt: {Name:mk50a8ad1b7e7bd883598800b650cbdca6469fdd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 05:59:18.149389 1462459 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/kindnet-132757/client.key ...
	I1209 05:59:18.149402 1462459 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/kindnet-132757/client.key: {Name:mkc96dbfe2e86c7f6e2cb58f5845a0b19fc59b39 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 05:59:18.149494 1462459 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/kindnet-132757/apiserver.key.d9657731
	I1209 05:59:18.149510 1462459 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/kindnet-132757/apiserver.crt.d9657731 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1209 05:59:18.618055 1462459 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/kindnet-132757/apiserver.crt.d9657731 ...
	I1209 05:59:18.618092 1462459 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/kindnet-132757/apiserver.crt.d9657731: {Name:mk272b38bc6dc4684df743708429f7770b76e47c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 05:59:18.618296 1462459 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/kindnet-132757/apiserver.key.d9657731 ...
	I1209 05:59:18.618310 1462459 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/kindnet-132757/apiserver.key.d9657731: {Name:mk0488d534ea034cbe8f2de456998b1fa316168e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 05:59:18.618395 1462459 certs.go:382] copying /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/kindnet-132757/apiserver.crt.d9657731 -> /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/kindnet-132757/apiserver.crt
	I1209 05:59:18.618486 1462459 certs.go:386] copying /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/kindnet-132757/apiserver.key.d9657731 -> /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/kindnet-132757/apiserver.key
	I1209 05:59:18.618546 1462459 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/kindnet-132757/proxy-client.key
	I1209 05:59:18.618564 1462459 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/kindnet-132757/proxy-client.crt with IP's: []
	I1209 05:59:18.680324 1462459 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/kindnet-132757/proxy-client.crt ...
	I1209 05:59:18.680348 1462459 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/kindnet-132757/proxy-client.crt: {Name:mkc518e533898efec36f2ae2e6e5ce8757fd528b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 05:59:18.680493 1462459 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/kindnet-132757/proxy-client.key ...
	I1209 05:59:18.680501 1462459 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/kindnet-132757/proxy-client.key: {Name:mk1323b71718a14993acaa6c5249c66a23abaab6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 05:59:18.680671 1462459 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/1144231.pem (1338 bytes)
	W1209 05:59:18.680709 1462459 certs.go:480] ignoring /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/1144231_empty.pem, impossibly tiny 0 bytes
	I1209 05:59:18.680718 1462459 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca-key.pem (1679 bytes)
	I1209 05:59:18.680745 1462459 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem (1078 bytes)
	I1209 05:59:18.680769 1462459 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/cert.pem (1123 bytes)
	I1209 05:59:18.680792 1462459 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/key.pem (1675 bytes)
	I1209 05:59:18.680834 1462459 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/ssl/certs/11442312.pem (1708 bytes)
	I1209 05:59:18.681396 1462459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 05:59:18.720902 1462459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1209 05:59:18.753020 1462459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 05:59:18.769723 1462459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1209 05:59:18.785937 1462459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/kindnet-132757/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1209 05:59:18.805755 1462459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/kindnet-132757/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1209 05:59:18.824094 1462459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/kindnet-132757/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 05:59:18.841628 1462459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/kindnet-132757/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1209 05:59:18.858373 1462459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/ssl/certs/11442312.pem --> /usr/share/ca-certificates/11442312.pem (1708 bytes)
	I1209 05:59:18.875268 1462459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 05:59:18.893054 1462459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/1144231.pem --> /usr/share/ca-certificates/1144231.pem (1338 bytes)
	I1209 05:59:18.909585 1462459 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 05:59:18.921921 1462459 ssh_runner.go:195] Run: openssl version
	I1209 05:59:18.927901 1462459 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/11442312.pem
	I1209 05:59:18.934988 1462459 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/11442312.pem /etc/ssl/certs/11442312.pem
	I1209 05:59:18.942476 1462459 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11442312.pem
	I1209 05:59:18.945985 1462459 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 04:18 /usr/share/ca-certificates/11442312.pem
	I1209 05:59:18.946096 1462459 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11442312.pem
	I1209 05:59:18.986846 1462459 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1209 05:59:18.994060 1462459 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/11442312.pem /etc/ssl/certs/3ec20f2e.0
	I1209 05:59:19.001108 1462459 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1209 05:59:19.011776 1462459 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1209 05:59:19.019383 1462459 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 05:59:19.023039 1462459 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 04:09 /usr/share/ca-certificates/minikubeCA.pem
	I1209 05:59:19.023108 1462459 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 05:59:19.064870 1462459 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1209 05:59:19.072269 1462459 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1209 05:59:19.079407 1462459 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1144231.pem
	I1209 05:59:19.086580 1462459 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1144231.pem /etc/ssl/certs/1144231.pem
	I1209 05:59:19.093909 1462459 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1144231.pem
	I1209 05:59:19.097739 1462459 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 04:18 /usr/share/ca-certificates/1144231.pem
	I1209 05:59:19.097805 1462459 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1144231.pem
	I1209 05:59:19.140065 1462459 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1209 05:59:19.147424 1462459 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/1144231.pem /etc/ssl/certs/51391683.0
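The repeated openssl x509 -hash / ln -fs pairs above implement the c_rehash convention: each CA under /usr/share/ca-certificates is linked into /etc/ssl/certs under its subject-hash name (e.g. b5213941.0 for minikubeCA.pem in this run) so OpenSSL can look it up by hash. A minimal sketch of one iteration:

    CERT=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")   # prints e.g. b5213941
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"  # .0 = first cert with this hash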
	I1209 05:59:19.154372 1462459 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 05:59:19.157742 1462459 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1209 05:59:19.157804 1462459 kubeadm.go:401] StartCluster: {Name:kindnet-132757 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kindnet-132757 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 05:59:19.157878 1462459 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1209 05:59:19.157929 1462459 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 05:59:19.183646 1462459 cri.go:89] found id: ""
	I1209 05:59:19.183731 1462459 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1209 05:59:19.191413 1462459 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 05:59:19.198785 1462459 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1209 05:59:19.198882 1462459 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 05:59:19.206379 1462459 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 05:59:19.206428 1462459 kubeadm.go:158] found existing configuration files:
	
	I1209 05:59:19.206481 1462459 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1209 05:59:19.213622 1462459 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 05:59:19.213685 1462459 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 05:59:19.220987 1462459 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1209 05:59:19.228410 1462459 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 05:59:19.228473 1462459 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 05:59:19.235508 1462459 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1209 05:59:19.242816 1462459 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 05:59:19.242886 1462459 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 05:59:19.249768 1462459 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1209 05:59:19.257165 1462459 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 05:59:19.257241 1462459 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1209 05:59:19.264187 1462459 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1209 05:59:19.335029 1462459 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1209 05:59:19.335359 1462459 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1209 05:59:19.396491 1462459 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1209 05:59:34.212434 1462459 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1209 05:59:34.212494 1462459 kubeadm.go:319] [preflight] Running pre-flight checks
	I1209 05:59:34.212598 1462459 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1209 05:59:34.212691 1462459 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1209 05:59:34.212739 1462459 kubeadm.go:319] OS: Linux
	I1209 05:59:34.212787 1462459 kubeadm.go:319] CGROUPS_CPU: enabled
	I1209 05:59:34.212836 1462459 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1209 05:59:34.212899 1462459 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1209 05:59:34.212960 1462459 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1209 05:59:34.213020 1462459 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1209 05:59:34.213074 1462459 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1209 05:59:34.213136 1462459 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1209 05:59:34.213190 1462459 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1209 05:59:34.213241 1462459 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1209 05:59:34.213324 1462459 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1209 05:59:34.213422 1462459 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1209 05:59:34.213517 1462459 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1209 05:59:34.213583 1462459 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1209 05:59:34.216538 1462459 out.go:252]   - Generating certificates and keys ...
	I1209 05:59:34.216635 1462459 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1209 05:59:34.216705 1462459 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1209 05:59:34.216776 1462459 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1209 05:59:34.216836 1462459 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1209 05:59:34.216907 1462459 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1209 05:59:34.216964 1462459 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1209 05:59:34.217023 1462459 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1209 05:59:34.217144 1462459 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [kindnet-132757 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1209 05:59:34.217201 1462459 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1209 05:59:34.217320 1462459 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [kindnet-132757 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1209 05:59:34.217389 1462459 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1209 05:59:34.217456 1462459 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1209 05:59:34.217503 1462459 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1209 05:59:34.217563 1462459 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1209 05:59:34.217617 1462459 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1209 05:59:34.217687 1462459 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1209 05:59:34.217747 1462459 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1209 05:59:34.217821 1462459 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1209 05:59:34.217879 1462459 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1209 05:59:34.217967 1462459 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1209 05:59:34.218036 1462459 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1209 05:59:34.221171 1462459 out.go:252]   - Booting up control plane ...
	I1209 05:59:34.221304 1462459 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1209 05:59:34.221399 1462459 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1209 05:59:34.221475 1462459 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1209 05:59:34.221592 1462459 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1209 05:59:34.221696 1462459 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1209 05:59:34.221815 1462459 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1209 05:59:34.221910 1462459 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1209 05:59:34.221955 1462459 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1209 05:59:34.222099 1462459 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1209 05:59:34.222214 1462459 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1209 05:59:34.222281 1462459 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001531778s
	I1209 05:59:34.222383 1462459 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1209 05:59:34.222472 1462459 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1209 05:59:34.222572 1462459 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1209 05:59:34.222658 1462459 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1209 05:59:34.222742 1462459 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.71670922s
	I1209 05:59:34.222818 1462459 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 5.130701257s
	I1209 05:59:34.222891 1462459 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 7.002272189s
	I1209 05:59:34.223009 1462459 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1209 05:59:34.223146 1462459 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1209 05:59:34.223211 1462459 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1209 05:59:34.223426 1462459 kubeadm.go:319] [mark-control-plane] Marking the node kindnet-132757 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1209 05:59:34.223490 1462459 kubeadm.go:319] [bootstrap-token] Using token: t5itg0.glr5io4kwnp1igbu
	I1209 05:59:34.226320 1462459 out.go:252]   - Configuring RBAC rules ...
	I1209 05:59:34.226438 1462459 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1209 05:59:34.226526 1462459 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1209 05:59:34.226678 1462459 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1209 05:59:34.226816 1462459 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1209 05:59:34.226965 1462459 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1209 05:59:34.227072 1462459 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1209 05:59:34.227210 1462459 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1209 05:59:34.227263 1462459 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1209 05:59:34.227313 1462459 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1209 05:59:34.227323 1462459 kubeadm.go:319] 
	I1209 05:59:34.227390 1462459 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1209 05:59:34.227397 1462459 kubeadm.go:319] 
	I1209 05:59:34.227485 1462459 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1209 05:59:34.227497 1462459 kubeadm.go:319] 
	I1209 05:59:34.227524 1462459 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1209 05:59:34.227583 1462459 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1209 05:59:34.227641 1462459 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1209 05:59:34.227650 1462459 kubeadm.go:319] 
	I1209 05:59:34.227708 1462459 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1209 05:59:34.227712 1462459 kubeadm.go:319] 
	I1209 05:59:34.227760 1462459 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1209 05:59:34.227763 1462459 kubeadm.go:319] 
	I1209 05:59:34.227824 1462459 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1209 05:59:34.227906 1462459 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1209 05:59:34.227995 1462459 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1209 05:59:34.228040 1462459 kubeadm.go:319] 
	I1209 05:59:34.228155 1462459 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1209 05:59:34.228257 1462459 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1209 05:59:34.228269 1462459 kubeadm.go:319] 
	I1209 05:59:34.228387 1462459 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token t5itg0.glr5io4kwnp1igbu \
	I1209 05:59:34.228499 1462459 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:98df943b2e6f85649a9af8e221693a225a3faf636e29a801d7cbe99d348eaf5d \
	I1209 05:59:34.228526 1462459 kubeadm.go:319] 	--control-plane 
	I1209 05:59:34.228534 1462459 kubeadm.go:319] 
	I1209 05:59:34.228660 1462459 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1209 05:59:34.228675 1462459 kubeadm.go:319] 
	I1209 05:59:34.228778 1462459 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token t5itg0.glr5io4kwnp1igbu \
	I1209 05:59:34.228912 1462459 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:98df943b2e6f85649a9af8e221693a225a3faf636e29a801d7cbe99d348eaf5d 
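The --discovery-token-ca-cert-hash in the join command above pins the cluster CA for a joining node. If the hash is ever lost, it can be recomputed from the CA certificate with the standard recipe from the kubeadm documentation (CA path as used in this run):

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'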
	I1209 05:59:34.228926 1462459 cni.go:84] Creating CNI manager for "kindnet"
	I1209 05:59:34.233784 1462459 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1209 05:59:34.236656 1462459 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1209 05:59:34.241619 1462459 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1209 05:59:34.241642 1462459 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1209 05:59:34.254304 1462459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1209 05:59:34.570010 1462459 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1209 05:59:34.570147 1462459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes kindnet-132757 minikube.k8s.io/updated_at=2025_12_09T05_59_34_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=604647ccc1f2cd4d60ec88f36255b328e04e507d minikube.k8s.io/name=kindnet-132757 minikube.k8s.io/primary=true
	I1209 05:59:34.570150 1462459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 05:59:34.818038 1462459 ops.go:34] apiserver oom_adj: -16
	I1209 05:59:34.818056 1462459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 05:59:35.318175 1462459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 05:59:35.818423 1462459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 05:59:36.318135 1462459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 05:59:36.818202 1462459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 05:59:37.318867 1462459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 05:59:37.818191 1462459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 05:59:38.318961 1462459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 05:59:38.818146 1462459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 05:59:39.044748 1462459 kubeadm.go:1114] duration metric: took 4.47466541s to wait for elevateKubeSystemPrivileges
	I1209 05:59:39.044783 1462459 kubeadm.go:403] duration metric: took 19.886989053s to StartCluster
	I1209 05:59:39.044802 1462459 settings.go:142] acquiring lock: {Name:mk8fa744e3d74bf8a1cbf5ac275c9f1969ad91a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 05:59:39.044879 1462459 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22081-1142328/kubeconfig
	I1209 05:59:39.045837 1462459 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1142328/kubeconfig: {Name:mk0dc127429f88dc4fdfb2d110deebc58207c1b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 05:59:39.046032 1462459 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1209 05:59:39.046114 1462459 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1209 05:59:39.046350 1462459 config.go:182] Loaded profile config "kindnet-132757": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1209 05:59:39.046391 1462459 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1209 05:59:39.046451 1462459 addons.go:70] Setting storage-provisioner=true in profile "kindnet-132757"
	I1209 05:59:39.046464 1462459 addons.go:239] Setting addon storage-provisioner=true in "kindnet-132757"
	I1209 05:59:39.046488 1462459 host.go:66] Checking if "kindnet-132757" exists ...
	I1209 05:59:39.047017 1462459 cli_runner.go:164] Run: docker container inspect kindnet-132757 --format={{.State.Status}}
	I1209 05:59:39.047528 1462459 addons.go:70] Setting default-storageclass=true in profile "kindnet-132757"
	I1209 05:59:39.047552 1462459 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "kindnet-132757"
	I1209 05:59:39.047811 1462459 cli_runner.go:164] Run: docker container inspect kindnet-132757 --format={{.State.Status}}
	I1209 05:59:39.051038 1462459 out.go:179] * Verifying Kubernetes components...
	I1209 05:59:39.054743 1462459 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 05:59:39.085248 1462459 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 05:59:39.088117 1462459 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 05:59:39.088139 1462459 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1209 05:59:39.088208 1462459 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-132757
	I1209 05:59:39.089702 1462459 addons.go:239] Setting addon default-storageclass=true in "kindnet-132757"
	I1209 05:59:39.089738 1462459 host.go:66] Checking if "kindnet-132757" exists ...
	I1209 05:59:39.090170 1462459 cli_runner.go:164] Run: docker container inspect kindnet-132757 --format={{.State.Status}}
	I1209 05:59:39.125803 1462459 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1209 05:59:39.125824 1462459 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1209 05:59:39.125884 1462459 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-132757
	I1209 05:59:39.127830 1462459 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34225 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/kindnet-132757/id_rsa Username:docker}
	I1209 05:59:39.158456 1462459 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34225 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/kindnet-132757/id_rsa Username:docker}
	I1209 05:59:39.370066 1462459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 05:59:39.437815 1462459 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1209 05:59:39.437928 1462459 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 05:59:39.454446 1462459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1209 05:59:40.400844 1462459 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.030748454s)
	I1209 05:59:40.401830 1462459 node_ready.go:35] waiting up to 15m0s for node "kindnet-132757" to be "Ready" ...
	I1209 05:59:40.402201 1462459 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1209 05:59:40.462932 1462459 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1209 05:59:40.466035 1462459 addons.go:530] duration metric: took 1.419626186s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1209 05:59:40.906341 1462459 kapi.go:214] "coredns" deployment in "kube-system" namespace and "kindnet-132757" context rescaled to 1 replicas
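The sed pipeline a few lines above injects a hosts block into the coredns ConfigMap so that host.minikube.internal resolves to the gateway (192.168.76.1 here). A quick way to confirm the injected block, assuming kubectl points at this cluster:

    kubectl -n kube-system get configmap coredns -o yaml | grep -A3 'hosts {'
    # expected, per the sed expression above:
    #     hosts {
    #        192.168.76.1 host.minikube.internal
    #        fallthrough
    #     }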
	W1209 05:59:42.404693 1462459 node_ready.go:57] node "kindnet-132757" has "Ready":"False" status (will retry)
	W1209 05:59:44.405438 1462459 node_ready.go:57] node "kindnet-132757" has "Ready":"False" status (will retry)
	W1209 05:59:46.905209 1462459 node_ready.go:57] node "kindnet-132757" has "Ready":"False" status (will retry)
	W1209 05:59:49.404687 1462459 node_ready.go:57] node "kindnet-132757" has "Ready":"False" status (will retry)
	I1209 05:59:50.406487 1462459 node_ready.go:49] node "kindnet-132757" is "Ready"
	I1209 05:59:50.406513 1462459 node_ready.go:38] duration metric: took 10.004657153s for node "kindnet-132757" to be "Ready" ...
	I1209 05:59:50.406527 1462459 api_server.go:52] waiting for apiserver process to appear ...
	I1209 05:59:50.406587 1462459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:59:50.425934 1462459 api_server.go:72] duration metric: took 11.379868802s to wait for apiserver process to appear ...
	I1209 05:59:50.425957 1462459 api_server.go:88] waiting for apiserver healthz status ...
	I1209 05:59:50.425975 1462459 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1209 05:59:50.435720 1462459 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1209 05:59:50.436785 1462459 api_server.go:141] control plane version: v1.34.2
	I1209 05:59:50.436830 1462459 api_server.go:131] duration metric: took 10.866191ms to wait for apiserver health ...
	I1209 05:59:50.436842 1462459 system_pods.go:43] waiting for kube-system pods to appear ...
	I1209 05:59:50.439691 1462459 system_pods.go:59] 8 kube-system pods found
	I1209 05:59:50.439725 1462459 system_pods.go:61] "coredns-66bc5c9577-d8ngg" [134ab9db-dc07-4636-b2e0-ccd6fd12f30a] Pending
	I1209 05:59:50.439731 1462459 system_pods.go:61] "etcd-kindnet-132757" [e48c7730-168a-41ef-b881-1113bfbcacab] Running
	I1209 05:59:50.439740 1462459 system_pods.go:61] "kindnet-fc72f" [97f4280d-3fb3-49b3-b4a7-676c7232f41d] Running
	I1209 05:59:50.439745 1462459 system_pods.go:61] "kube-apiserver-kindnet-132757" [aaa630fd-765f-4ec9-abcc-070bf1ae654e] Running
	I1209 05:59:50.439749 1462459 system_pods.go:61] "kube-controller-manager-kindnet-132757" [6f29a5ac-39a2-4fe4-8045-4216b8d4fde4] Running
	I1209 05:59:50.439756 1462459 system_pods.go:61] "kube-proxy-vhc24" [51f3abac-8ef5-47ab-835b-9b71b93f413d] Running
	I1209 05:59:50.439764 1462459 system_pods.go:61] "kube-scheduler-kindnet-132757" [6c097ca5-7d9d-4258-8b72-8d7c157662dc] Running
	I1209 05:59:50.439772 1462459 system_pods.go:61] "storage-provisioner" [3898cd03-51a6-4ccf-86f8-3251f4d69c4c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1209 05:59:50.439787 1462459 system_pods.go:74] duration metric: took 2.934927ms to wait for pod list to return data ...
	I1209 05:59:50.439796 1462459 default_sa.go:34] waiting for default service account to be created ...
	I1209 05:59:50.453631 1462459 default_sa.go:45] found service account: "default"
	I1209 05:59:50.453664 1462459 default_sa.go:55] duration metric: took 13.861317ms for default service account to be created ...
	I1209 05:59:50.453675 1462459 system_pods.go:116] waiting for k8s-apps to be running ...
	I1209 05:59:50.456843 1462459 system_pods.go:86] 8 kube-system pods found
	I1209 05:59:50.456938 1462459 system_pods.go:89] "coredns-66bc5c9577-d8ngg" [134ab9db-dc07-4636-b2e0-ccd6fd12f30a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1209 05:59:50.456961 1462459 system_pods.go:89] "etcd-kindnet-132757" [e48c7730-168a-41ef-b881-1113bfbcacab] Running
	I1209 05:59:50.457000 1462459 system_pods.go:89] "kindnet-fc72f" [97f4280d-3fb3-49b3-b4a7-676c7232f41d] Running
	I1209 05:59:50.457026 1462459 system_pods.go:89] "kube-apiserver-kindnet-132757" [aaa630fd-765f-4ec9-abcc-070bf1ae654e] Running
	I1209 05:59:50.457046 1462459 system_pods.go:89] "kube-controller-manager-kindnet-132757" [6f29a5ac-39a2-4fe4-8045-4216b8d4fde4] Running
	I1209 05:59:50.457079 1462459 system_pods.go:89] "kube-proxy-vhc24" [51f3abac-8ef5-47ab-835b-9b71b93f413d] Running
	I1209 05:59:50.457101 1462459 system_pods.go:89] "kube-scheduler-kindnet-132757" [6c097ca5-7d9d-4258-8b72-8d7c157662dc] Running
	I1209 05:59:50.457121 1462459 system_pods.go:89] "storage-provisioner" [3898cd03-51a6-4ccf-86f8-3251f4d69c4c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1209 05:59:50.457168 1462459 retry.go:31] will retry after 227.210535ms: missing components: kube-dns
	I1209 05:59:50.687768 1462459 system_pods.go:86] 8 kube-system pods found
	I1209 05:59:50.687804 1462459 system_pods.go:89] "coredns-66bc5c9577-d8ngg" [134ab9db-dc07-4636-b2e0-ccd6fd12f30a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1209 05:59:50.687811 1462459 system_pods.go:89] "etcd-kindnet-132757" [e48c7730-168a-41ef-b881-1113bfbcacab] Running
	I1209 05:59:50.687852 1462459 system_pods.go:89] "kindnet-fc72f" [97f4280d-3fb3-49b3-b4a7-676c7232f41d] Running
	I1209 05:59:50.687872 1462459 system_pods.go:89] "kube-apiserver-kindnet-132757" [aaa630fd-765f-4ec9-abcc-070bf1ae654e] Running
	I1209 05:59:50.687877 1462459 system_pods.go:89] "kube-controller-manager-kindnet-132757" [6f29a5ac-39a2-4fe4-8045-4216b8d4fde4] Running
	I1209 05:59:50.687882 1462459 system_pods.go:89] "kube-proxy-vhc24" [51f3abac-8ef5-47ab-835b-9b71b93f413d] Running
	I1209 05:59:50.687886 1462459 system_pods.go:89] "kube-scheduler-kindnet-132757" [6c097ca5-7d9d-4258-8b72-8d7c157662dc] Running
	I1209 05:59:50.687898 1462459 system_pods.go:89] "storage-provisioner" [3898cd03-51a6-4ccf-86f8-3251f4d69c4c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1209 05:59:50.687929 1462459 retry.go:31] will retry after 373.66933ms: missing components: kube-dns
	I1209 05:59:51.066503 1462459 system_pods.go:86] 8 kube-system pods found
	I1209 05:59:51.066552 1462459 system_pods.go:89] "coredns-66bc5c9577-d8ngg" [134ab9db-dc07-4636-b2e0-ccd6fd12f30a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1209 05:59:51.066599 1462459 system_pods.go:89] "etcd-kindnet-132757" [e48c7730-168a-41ef-b881-1113bfbcacab] Running
	I1209 05:59:51.066614 1462459 system_pods.go:89] "kindnet-fc72f" [97f4280d-3fb3-49b3-b4a7-676c7232f41d] Running
	I1209 05:59:51.066620 1462459 system_pods.go:89] "kube-apiserver-kindnet-132757" [aaa630fd-765f-4ec9-abcc-070bf1ae654e] Running
	I1209 05:59:51.066624 1462459 system_pods.go:89] "kube-controller-manager-kindnet-132757" [6f29a5ac-39a2-4fe4-8045-4216b8d4fde4] Running
	I1209 05:59:51.066640 1462459 system_pods.go:89] "kube-proxy-vhc24" [51f3abac-8ef5-47ab-835b-9b71b93f413d] Running
	I1209 05:59:51.066660 1462459 system_pods.go:89] "kube-scheduler-kindnet-132757" [6c097ca5-7d9d-4258-8b72-8d7c157662dc] Running
	I1209 05:59:51.066673 1462459 system_pods.go:89] "storage-provisioner" [3898cd03-51a6-4ccf-86f8-3251f4d69c4c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1209 05:59:51.066698 1462459 retry.go:31] will retry after 428.448447ms: missing components: kube-dns
	I1209 05:59:51.499325 1462459 system_pods.go:86] 8 kube-system pods found
	I1209 05:59:51.499792 1462459 system_pods.go:89] "coredns-66bc5c9577-d8ngg" [134ab9db-dc07-4636-b2e0-ccd6fd12f30a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1209 05:59:51.499800 1462459 system_pods.go:89] "etcd-kindnet-132757" [e48c7730-168a-41ef-b881-1113bfbcacab] Running
	I1209 05:59:51.499808 1462459 system_pods.go:89] "kindnet-fc72f" [97f4280d-3fb3-49b3-b4a7-676c7232f41d] Running
	I1209 05:59:51.499820 1462459 system_pods.go:89] "kube-apiserver-kindnet-132757" [aaa630fd-765f-4ec9-abcc-070bf1ae654e] Running
	I1209 05:59:51.499825 1462459 system_pods.go:89] "kube-controller-manager-kindnet-132757" [6f29a5ac-39a2-4fe4-8045-4216b8d4fde4] Running
	I1209 05:59:51.499830 1462459 system_pods.go:89] "kube-proxy-vhc24" [51f3abac-8ef5-47ab-835b-9b71b93f413d] Running
	I1209 05:59:51.499835 1462459 system_pods.go:89] "kube-scheduler-kindnet-132757" [6c097ca5-7d9d-4258-8b72-8d7c157662dc] Running
	I1209 05:59:51.499855 1462459 system_pods.go:89] "storage-provisioner" [3898cd03-51a6-4ccf-86f8-3251f4d69c4c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1209 05:59:51.499870 1462459 retry.go:31] will retry after 525.21466ms: missing components: kube-dns
	I1209 05:59:52.029403 1462459 system_pods.go:86] 8 kube-system pods found
	I1209 05:59:52.029438 1462459 system_pods.go:89] "coredns-66bc5c9577-d8ngg" [134ab9db-dc07-4636-b2e0-ccd6fd12f30a] Running
	I1209 05:59:52.029446 1462459 system_pods.go:89] "etcd-kindnet-132757" [e48c7730-168a-41ef-b881-1113bfbcacab] Running
	I1209 05:59:52.029450 1462459 system_pods.go:89] "kindnet-fc72f" [97f4280d-3fb3-49b3-b4a7-676c7232f41d] Running
	I1209 05:59:52.029455 1462459 system_pods.go:89] "kube-apiserver-kindnet-132757" [aaa630fd-765f-4ec9-abcc-070bf1ae654e] Running
	I1209 05:59:52.029459 1462459 system_pods.go:89] "kube-controller-manager-kindnet-132757" [6f29a5ac-39a2-4fe4-8045-4216b8d4fde4] Running
	I1209 05:59:52.029464 1462459 system_pods.go:89] "kube-proxy-vhc24" [51f3abac-8ef5-47ab-835b-9b71b93f413d] Running
	I1209 05:59:52.029468 1462459 system_pods.go:89] "kube-scheduler-kindnet-132757" [6c097ca5-7d9d-4258-8b72-8d7c157662dc] Running
	I1209 05:59:52.029471 1462459 system_pods.go:89] "storage-provisioner" [3898cd03-51a6-4ccf-86f8-3251f4d69c4c] Running
	I1209 05:59:52.029480 1462459 system_pods.go:126] duration metric: took 1.575799057s to wait for k8s-apps to be running ...
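The retry loop above polls the kube-system pod list by hand until kube-dns leaves Pending. For interactive use the same wait can be expressed with kubectl (a sketch, not what minikube itself runs; the label matches the one listed later in this log):

    kubectl -n kube-system wait --for=condition=Ready pod \
      -l k8s-app=kube-dns --timeout=120s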
	I1209 05:59:52.029491 1462459 system_svc.go:44] waiting for kubelet service to be running ....
	I1209 05:59:52.029553 1462459 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 05:59:52.044532 1462459 system_svc.go:56] duration metric: took 15.030788ms WaitForService to wait for kubelet
	I1209 05:59:52.044564 1462459 kubeadm.go:587] duration metric: took 12.998503911s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 05:59:52.044584 1462459 node_conditions.go:102] verifying NodePressure condition ...
	I1209 05:59:52.047626 1462459 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1209 05:59:52.047660 1462459 node_conditions.go:123] node cpu capacity is 2
	I1209 05:59:52.047673 1462459 node_conditions.go:105] duration metric: took 3.083951ms to run NodePressure ...
	I1209 05:59:52.047687 1462459 start.go:242] waiting for startup goroutines ...
	I1209 05:59:52.047696 1462459 start.go:247] waiting for cluster config update ...
	I1209 05:59:52.047718 1462459 start.go:256] writing updated cluster config ...
	I1209 05:59:52.048112 1462459 ssh_runner.go:195] Run: rm -f paused
	I1209 05:59:52.051941 1462459 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1209 05:59:52.055849 1462459 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-d8ngg" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 05:59:52.061442 1462459 pod_ready.go:94] pod "coredns-66bc5c9577-d8ngg" is "Ready"
	I1209 05:59:52.061473 1462459 pod_ready.go:86] duration metric: took 5.593694ms for pod "coredns-66bc5c9577-d8ngg" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 05:59:52.064055 1462459 pod_ready.go:83] waiting for pod "etcd-kindnet-132757" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 05:59:52.070084 1462459 pod_ready.go:94] pod "etcd-kindnet-132757" is "Ready"
	I1209 05:59:52.070115 1462459 pod_ready.go:86] duration metric: took 6.031029ms for pod "etcd-kindnet-132757" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 05:59:52.073290 1462459 pod_ready.go:83] waiting for pod "kube-apiserver-kindnet-132757" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 05:59:52.078451 1462459 pod_ready.go:94] pod "kube-apiserver-kindnet-132757" is "Ready"
	I1209 05:59:52.078482 1462459 pod_ready.go:86] duration metric: took 5.163193ms for pod "kube-apiserver-kindnet-132757" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 05:59:52.081101 1462459 pod_ready.go:83] waiting for pod "kube-controller-manager-kindnet-132757" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 05:59:52.456445 1462459 pod_ready.go:94] pod "kube-controller-manager-kindnet-132757" is "Ready"
	I1209 05:59:52.456513 1462459 pod_ready.go:86] duration metric: took 375.383607ms for pod "kube-controller-manager-kindnet-132757" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 05:59:52.657129 1462459 pod_ready.go:83] waiting for pod "kube-proxy-vhc24" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 05:59:53.055907 1462459 pod_ready.go:94] pod "kube-proxy-vhc24" is "Ready"
	I1209 05:59:53.055942 1462459 pod_ready.go:86] duration metric: took 398.786565ms for pod "kube-proxy-vhc24" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 05:59:53.256402 1462459 pod_ready.go:83] waiting for pod "kube-scheduler-kindnet-132757" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 05:59:53.656674 1462459 pod_ready.go:94] pod "kube-scheduler-kindnet-132757" is "Ready"
	I1209 05:59:53.656703 1462459 pod_ready.go:86] duration metric: took 400.273894ms for pod "kube-scheduler-kindnet-132757" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 05:59:53.656717 1462459 pod_ready.go:40] duration metric: took 1.604744294s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1209 05:59:53.707658 1462459 start.go:625] kubectl: 1.33.2, cluster: 1.34.2 (minor skew: 1)
	I1209 05:59:53.711825 1462459 out.go:179] * Done! kubectl is now configured to use "kindnet-132757" cluster and "default" namespace by default
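The pod_ready polling above is minikube's internal readiness gate over the core control-plane labels. Roughly the same check can be reproduced by hand with kubectl wait; the sketch below takes the label selectors and the 4m budget from the log lines above, with one caveat: minikube counts a missing pod as success ("Ready" or be gone), while kubectl wait errors out when nothing matches a selector.
	# Poll each core kube-system component for Ready, mirroring the
	# selectors minikube cycles through above. Sketch only.
	for sel in k8s-app=kube-dns component=etcd component=kube-apiserver \
	           component=kube-controller-manager k8s-app=kube-proxy \
	           component=kube-scheduler; do
	  kubectl -n kube-system wait --for=condition=Ready pod -l "$sel" --timeout=4m
	done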
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 09 05:45:25 no-preload-842269 containerd[555]: time="2025-12-09T05:45:25.686544286Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 09 05:45:25 no-preload-842269 containerd[555]: time="2025-12-09T05:45:25.686621568Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 09 05:45:25 no-preload-842269 containerd[555]: time="2025-12-09T05:45:25.686720651Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 09 05:45:25 no-preload-842269 containerd[555]: time="2025-12-09T05:45:25.686789392Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 09 05:45:25 no-preload-842269 containerd[555]: time="2025-12-09T05:45:25.686856097Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 09 05:45:25 no-preload-842269 containerd[555]: time="2025-12-09T05:45:25.686918545Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 09 05:45:25 no-preload-842269 containerd[555]: time="2025-12-09T05:45:25.686973706Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 09 05:45:25 no-preload-842269 containerd[555]: time="2025-12-09T05:45:25.687041799Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 09 05:45:25 no-preload-842269 containerd[555]: time="2025-12-09T05:45:25.687108406Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 09 05:45:25 no-preload-842269 containerd[555]: time="2025-12-09T05:45:25.687193261Z" level=info msg="Connect containerd service"
	Dec 09 05:45:25 no-preload-842269 containerd[555]: time="2025-12-09T05:45:25.687520145Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 09 05:45:25 no-preload-842269 containerd[555]: time="2025-12-09T05:45:25.688289092Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 09 05:45:25 no-preload-842269 containerd[555]: time="2025-12-09T05:45:25.699337805Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 09 05:45:25 no-preload-842269 containerd[555]: time="2025-12-09T05:45:25.699416343Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 09 05:45:25 no-preload-842269 containerd[555]: time="2025-12-09T05:45:25.699485994Z" level=info msg="Start subscribing containerd event"
	Dec 09 05:45:25 no-preload-842269 containerd[555]: time="2025-12-09T05:45:25.700731392Z" level=info msg="Start recovering state"
	Dec 09 05:45:25 no-preload-842269 containerd[555]: time="2025-12-09T05:45:25.726934659Z" level=info msg="Start event monitor"
	Dec 09 05:45:25 no-preload-842269 containerd[555]: time="2025-12-09T05:45:25.727028597Z" level=info msg="Start cni network conf syncer for default"
	Dec 09 05:45:25 no-preload-842269 containerd[555]: time="2025-12-09T05:45:25.727048600Z" level=info msg="Start streaming server"
	Dec 09 05:45:25 no-preload-842269 containerd[555]: time="2025-12-09T05:45:25.727060752Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 09 05:45:25 no-preload-842269 containerd[555]: time="2025-12-09T05:45:25.727107495Z" level=info msg="runtime interface starting up..."
	Dec 09 05:45:25 no-preload-842269 containerd[555]: time="2025-12-09T05:45:25.727114871Z" level=info msg="starting plugins..."
	Dec 09 05:45:25 no-preload-842269 containerd[555]: time="2025-12-09T05:45:25.727324515Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 09 05:45:25 no-preload-842269 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 09 05:45:25 no-preload-842269 containerd[555]: time="2025-12-09T05:45:25.730766873Z" level=info msg="containerd successfully booted in 0.068739s"
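The "failed to load cni during init" error above is expected at this point in the boot: containerd's CRI plugin keeps rescanning /etc/cni/net.d until a network config appears, which normally happens once the CNI daemonset writes one. For reference, a minimal conflist of the shape the CRI plugin accepts looks roughly like the sketch below; the file name, network name, and plugin choice are illustrative, not the file this cluster eventually used. Only the 10.42.0.0/16 subnet is taken from this run's kubeadm pod-network-cidr.
	# Illustrative only: drop a minimal bridge/host-local conflist in the
	# directory the CRI plugin watches. Every value is a placeholder
	# except the pod CIDR, which matches this run's kubeadm setting.
	sudo tee /etc/cni/net.d/10-example.conflist >/dev/null <<-'EOF'
	{
	  "cniVersion": "1.0.0",
	  "name": "example-net",
	  "plugins": [
	    { "type": "bridge", "bridge": "cni0", "isDefaultGateway": true,
	      "ipam": { "type": "host-local", "subnet": "10.42.0.0/16" } },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF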
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 06:00:30.963924    7948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 06:00:30.965019    7948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 06:00:30.965324    7948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 06:00:30.968916    7948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 06:00:30.969224    7948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
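The connection-refused errors above simply mean nothing is listening on port 8443 inside the node, which is consistent with the kubelet crash loop shown in the kubelet section below: with the kubelet down, the static apiserver pod is never restarted. A quick sketch for confirming this from inside the node (8443 is this profile's APIServerPort):
	# Check whether anything is bound to the apiserver port ...
	sudo ss -ltnp | grep 8443 || echo "apiserver not listening"
	# ... and whether it answers at all (-k because the cert is self-signed).
	curl -sk https://localhost:8443/healthz || true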
	
	
	==> dmesg <==
	[ +25.904254] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:14] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:16] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:18] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:19] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:30] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:32] overlayfs: idmapped layers are currently not supported
	[ +28.114653] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:33] overlayfs: idmapped layers are currently not supported
	[ +23.720849] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:34] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:35] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:36] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:37] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:38] overlayfs: idmapped layers are currently not supported
	[ +23.656275] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:39] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:57] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:58] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:00] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:02] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:03] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:05] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:06] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 9 05:31] overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	
	
	==> kernel <==
	 06:00:31 up  8:42,  0 user,  load average: 1.71, 1.21, 1.22
	Linux no-preload-842269 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
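Worth noting when reading this section: the kernel line is the AWS host's Ubuntu 20.04 kernel (5.15.0-1084-aws), shared with the container, while PRETTY_NAME reports the kic base image's Debian 12 userland. The three lines correspond to standard probes, presumably along the lines of:
	uptime                             # time, uptime, user count, load averages
	uname -a                           # host kernel, shared with the container
	grep PRETTY_NAME /etc/os-release   # container userland (Debian 12 here)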
	
	
	==> kubelet <==
	Dec 09 06:00:27 no-preload-842269 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 09 06:00:28 no-preload-842269 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1202.
	Dec 09 06:00:28 no-preload-842269 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 06:00:28 no-preload-842269 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 06:00:28 no-preload-842269 kubelet[7819]: E1209 06:00:28.737285    7819 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 09 06:00:28 no-preload-842269 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 09 06:00:28 no-preload-842269 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 09 06:00:29 no-preload-842269 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1203.
	Dec 09 06:00:29 no-preload-842269 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 06:00:29 no-preload-842269 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 06:00:29 no-preload-842269 kubelet[7824]: E1209 06:00:29.527495    7824 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 09 06:00:29 no-preload-842269 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 09 06:00:29 no-preload-842269 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 09 06:00:30 no-preload-842269 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1204.
	Dec 09 06:00:30 no-preload-842269 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 06:00:30 no-preload-842269 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 06:00:30 no-preload-842269 kubelet[7859]: E1209 06:00:30.272620    7859 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 09 06:00:30 no-preload-842269 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 09 06:00:30 no-preload-842269 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 09 06:00:30 no-preload-842269 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1205.
	Dec 09 06:00:30 no-preload-842269 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 06:00:30 no-preload-842269 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 06:00:31 no-preload-842269 kubelet[7953]: E1209 06:00:31.009819    7953 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 09 06:00:31 no-preload-842269 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 09 06:00:31 no-preload-842269 systemd[1]: kubelet.service: Failed with result 'exit-code'.
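This validation failure is the common root cause behind the no-preload and newest-cni failures in this report: the Ubuntu 20.04 host still mounts cgroup v1 (the docker info dumps below show CgroupDriver:cgroupfs), and the v1.35.0-beta.0 kubelet is configured to refuse such hosts. The cgroup mode of a host can be checked with one stat call; the KubeletConfiguration fragment is a sketch of the setting that, on our reading of the message, produces exactly this error:
	# cgroup2fs -> unified cgroup v2; tmpfs -> legacy cgroup v1 (this host).
	stat -fc %T /sys/fs/cgroup/

	# Sketch of the kubelet config field behind the message above
	# (KubeletConfiguration, available since Kubernetes 1.31):
	#   apiVersion: kubelet.config.k8s.io/v1beta1
	#   kind: KubeletConfiguration
	#   failCgroupV1: true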
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-842269 -n no-preload-842269
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-842269 -n no-preload-842269: exit status 2 (418.70036ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "no-preload-842269" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (542.23s)

x
+
TestStartStop/group/newest-cni/serial/Pause (9.14s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-262540 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-262540 -n newest-cni-262540
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-262540 -n newest-cni-262540: exit status 2 (303.39441ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: post-pause apiserver status = "Stopped"; want = "Paused"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-262540 -n newest-cni-262540
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-262540 -n newest-cni-262540: exit status 2 (310.096726ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-262540 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-262540 -n newest-cni-262540
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-262540 -n newest-cni-262540: exit status 2 (308.773139ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: post-unpause apiserver status = "Stopped"; want = "Running"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-262540 -n newest-cni-262540
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-262540 -n newest-cni-262540: exit status 2 (320.354023ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: post-unpause kubelet status = "Stopped"; want = "Running"
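Each assertion above pulls a single field of minikube's status struct through a Go template. When reproducing this by hand, the three fields the test exercises can be combined into one call (sketch):
	# One probe covering the fields this test checks separately.
	out/minikube-linux-arm64 status -p newest-cni-262540 \
	  --format '{{.Host}} {{.Kubelet}} {{.APIServer}}'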
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-262540
helpers_test.go:243: (dbg) docker inspect newest-cni-262540:

-- stdout --
	[
	    {
	        "Id": "ed3de5d59c962edb9b6bef3201cdaec7fe174aa520c616b7fefbc3014b60f5d7",
	        "Created": "2025-12-09T05:40:46.656747886Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1437242,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-09T05:50:48.635687357Z",
	            "FinishedAt": "2025-12-09T05:50:47.310180166Z"
	        },
	        "Image": "sha256:e4eb91ed18a24161fce60c7cdd660144ecd5b8c5029dc2dea2c5e423c2f48ce4",
	        "ResolvConfPath": "/var/lib/docker/containers/ed3de5d59c962edb9b6bef3201cdaec7fe174aa520c616b7fefbc3014b60f5d7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ed3de5d59c962edb9b6bef3201cdaec7fe174aa520c616b7fefbc3014b60f5d7/hostname",
	        "HostsPath": "/var/lib/docker/containers/ed3de5d59c962edb9b6bef3201cdaec7fe174aa520c616b7fefbc3014b60f5d7/hosts",
	        "LogPath": "/var/lib/docker/containers/ed3de5d59c962edb9b6bef3201cdaec7fe174aa520c616b7fefbc3014b60f5d7/ed3de5d59c962edb9b6bef3201cdaec7fe174aa520c616b7fefbc3014b60f5d7-json.log",
	        "Name": "/newest-cni-262540",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-262540:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-262540",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ed3de5d59c962edb9b6bef3201cdaec7fe174aa520c616b7fefbc3014b60f5d7",
	                "LowerDir": "/var/lib/docker/overlay2/6415c7c451ba9f1315edabee60b733d482ef79cadbd27911dea3809654b6c1ec-init/diff:/var/lib/docker/overlay2/c44bb57aa59cc265266f37f2bb6e7ec0e7d641c3b4aeaa57e6d23deec6f0d1d4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6415c7c451ba9f1315edabee60b733d482ef79cadbd27911dea3809654b6c1ec/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6415c7c451ba9f1315edabee60b733d482ef79cadbd27911dea3809654b6c1ec/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6415c7c451ba9f1315edabee60b733d482ef79cadbd27911dea3809654b6c1ec/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-262540",
	                "Source": "/var/lib/docker/volumes/newest-cni-262540/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-262540",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-262540",
	                "name.minikube.sigs.k8s.io": "newest-cni-262540",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5ef6b7780104cfde91a86dd0f42d780a7d42fd9d965a232761225f3bafa31a2e",
	            "SandboxKey": "/var/run/docker/netns/5ef6b7780104",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34215"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34216"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34219"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34217"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34218"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-262540": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "92:d2:57:f6:4e:32",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "aa89e26051ba524ceb1352e47e7602df84b3dfd74bbc435c72069a1036fceebf",
	                    "EndpointID": "79808c0b2bead60a0d6333b887aa13d7b302f422db688969b287245b73727791",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-262540",
	                        "ed3de5d59c96"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
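One detail of the inspect output worth flagging: HostConfig.PortBindings requests ephemeral host ports (empty HostPort), and NetworkSettings.Ports records what the daemon actually allocated (34215-34219 here). The Last Start log further down extracts the SSH mapping with an inline template; the standalone equivalent is:
	# Resolve the host port that Docker mapped to the container's SSH port.
	docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
	  newest-cni-262540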
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-262540 -n newest-cni-262540
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-262540 -n newest-cni-262540: exit status 2 (317.148917ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-262540 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-262540 logs -n 25: (1.552230932s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                            ARGS                                                                                                                            │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ unpause │ -p embed-certs-432108 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-432108           │ jenkins │ v1.37.0 │ 09 Dec 25 05:37 UTC │ 09 Dec 25 05:37 UTC │
	│ delete  │ -p embed-certs-432108                                                                                                                                                                                                                                      │ embed-certs-432108           │ jenkins │ v1.37.0 │ 09 Dec 25 05:37 UTC │ 09 Dec 25 05:37 UTC │
	│ delete  │ -p embed-certs-432108                                                                                                                                                                                                                                      │ embed-certs-432108           │ jenkins │ v1.37.0 │ 09 Dec 25 05:37 UTC │ 09 Dec 25 05:37 UTC │
	│ start   │ -p default-k8s-diff-port-564611 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-564611 │ jenkins │ v1.37.0 │ 09 Dec 25 05:37 UTC │ 09 Dec 25 05:39 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-564611 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ default-k8s-diff-port-564611 │ jenkins │ v1.37.0 │ 09 Dec 25 05:39 UTC │ 09 Dec 25 05:39 UTC │
	│ stop    │ -p default-k8s-diff-port-564611 --alsologtostderr -v=3                                                                                                                                                                                                     │ default-k8s-diff-port-564611 │ jenkins │ v1.37.0 │ 09 Dec 25 05:39 UTC │ 09 Dec 25 05:39 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-564611 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ default-k8s-diff-port-564611 │ jenkins │ v1.37.0 │ 09 Dec 25 05:39 UTC │ 09 Dec 25 05:39 UTC │
	│ start   │ -p default-k8s-diff-port-564611 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-564611 │ jenkins │ v1.37.0 │ 09 Dec 25 05:39 UTC │ 09 Dec 25 05:40 UTC │
	│ image   │ default-k8s-diff-port-564611 image list --format=json                                                                                                                                                                                                      │ default-k8s-diff-port-564611 │ jenkins │ v1.37.0 │ 09 Dec 25 05:40 UTC │ 09 Dec 25 05:40 UTC │
	│ pause   │ -p default-k8s-diff-port-564611 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-564611 │ jenkins │ v1.37.0 │ 09 Dec 25 05:40 UTC │ 09 Dec 25 05:40 UTC │
	│ unpause │ -p default-k8s-diff-port-564611 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-564611 │ jenkins │ v1.37.0 │ 09 Dec 25 05:40 UTC │ 09 Dec 25 05:40 UTC │
	│ delete  │ -p default-k8s-diff-port-564611                                                                                                                                                                                                                            │ default-k8s-diff-port-564611 │ jenkins │ v1.37.0 │ 09 Dec 25 05:40 UTC │ 09 Dec 25 05:40 UTC │
	│ delete  │ -p default-k8s-diff-port-564611                                                                                                                                                                                                                            │ default-k8s-diff-port-564611 │ jenkins │ v1.37.0 │ 09 Dec 25 05:40 UTC │ 09 Dec 25 05:40 UTC │
	│ start   │ -p newest-cni-262540 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-262540            │ jenkins │ v1.37.0 │ 09 Dec 25 05:40 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-842269 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ no-preload-842269            │ jenkins │ v1.37.0 │ 09 Dec 25 05:43 UTC │                     │
	│ stop    │ -p no-preload-842269 --alsologtostderr -v=3                                                                                                                                                                                                                │ no-preload-842269            │ jenkins │ v1.37.0 │ 09 Dec 25 05:45 UTC │ 09 Dec 25 05:45 UTC │
	│ addons  │ enable dashboard -p no-preload-842269 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                               │ no-preload-842269            │ jenkins │ v1.37.0 │ 09 Dec 25 05:45 UTC │ 09 Dec 25 05:45 UTC │
	│ start   │ -p no-preload-842269 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-842269            │ jenkins │ v1.37.0 │ 09 Dec 25 05:45 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-262540 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ newest-cni-262540            │ jenkins │ v1.37.0 │ 09 Dec 25 05:49 UTC │                     │
	│ stop    │ -p newest-cni-262540 --alsologtostderr -v=3                                                                                                                                                                                                                │ newest-cni-262540            │ jenkins │ v1.37.0 │ 09 Dec 25 05:50 UTC │ 09 Dec 25 05:50 UTC │
	│ addons  │ enable dashboard -p newest-cni-262540 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                               │ newest-cni-262540            │ jenkins │ v1.37.0 │ 09 Dec 25 05:50 UTC │ 09 Dec 25 05:50 UTC │
	│ start   │ -p newest-cni-262540 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-262540            │ jenkins │ v1.37.0 │ 09 Dec 25 05:50 UTC │                     │
	│ image   │ newest-cni-262540 image list --format=json                                                                                                                                                                                                                 │ newest-cni-262540            │ jenkins │ v1.37.0 │ 09 Dec 25 05:57 UTC │ 09 Dec 25 05:57 UTC │
	│ pause   │ -p newest-cni-262540 --alsologtostderr -v=1                                                                                                                                                                                                                │ newest-cni-262540            │ jenkins │ v1.37.0 │ 09 Dec 25 05:57 UTC │ 09 Dec 25 05:57 UTC │
	│ unpause │ -p newest-cni-262540 --alsologtostderr -v=1                                                                                                                                                                                                                │ newest-cni-262540            │ jenkins │ v1.37.0 │ 09 Dec 25 05:57 UTC │ 09 Dec 25 05:57 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/09 05:50:48
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 05:50:48.368732 1437114 out.go:360] Setting OutFile to fd 1 ...
	I1209 05:50:48.368913 1437114 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 05:50:48.368940 1437114 out.go:374] Setting ErrFile to fd 2...
	I1209 05:50:48.368958 1437114 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 05:50:48.369216 1437114 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-1142328/.minikube/bin
	I1209 05:50:48.369601 1437114 out.go:368] Setting JSON to false
	I1209 05:50:48.370536 1437114 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":30772,"bootTime":1765228677,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1209 05:50:48.370622 1437114 start.go:143] virtualization:  
	I1209 05:50:48.373806 1437114 out.go:179] * [newest-cni-262540] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1209 05:50:48.377517 1437114 out.go:179]   - MINIKUBE_LOCATION=22081
	I1209 05:50:48.377579 1437114 notify.go:221] Checking for updates...
	I1209 05:50:48.383314 1437114 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 05:50:48.386284 1437114 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22081-1142328/kubeconfig
	I1209 05:50:48.389132 1437114 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-1142328/.minikube
	I1209 05:50:48.392076 1437114 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1209 05:50:48.394975 1437114 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 05:50:48.398361 1437114 config.go:182] Loaded profile config "newest-cni-262540": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1209 05:50:48.398977 1437114 driver.go:422] Setting default libvirt URI to qemu:///system
	I1209 05:50:48.429565 1437114 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1209 05:50:48.429674 1437114 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 05:50:48.493190 1437114 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-09 05:50:48.483865172 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1209 05:50:48.493298 1437114 docker.go:319] overlay module found
	I1209 05:50:48.496461 1437114 out.go:179] * Using the docker driver based on existing profile
	I1209 05:50:48.499256 1437114 start.go:309] selected driver: docker
	I1209 05:50:48.499276 1437114 start.go:927] validating driver "docker" against &{Name:newest-cni-262540 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-262540 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 05:50:48.499393 1437114 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 05:50:48.500188 1437114 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 05:50:48.552839 1437114 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-09 05:50:48.544121972 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1209 05:50:48.553181 1437114 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1209 05:50:48.553214 1437114 cni.go:84] Creating CNI manager for ""
	I1209 05:50:48.553271 1437114 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1209 05:50:48.553312 1437114 start.go:353] cluster config:
	{Name:newest-cni-262540 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-262540 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 05:50:48.558270 1437114 out.go:179] * Starting "newest-cni-262540" primary control-plane node in "newest-cni-262540" cluster
	I1209 05:50:48.560987 1437114 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1209 05:50:48.563913 1437114 out.go:179] * Pulling base image v0.0.48-1765184860-22066 ...
	I1209 05:50:48.566628 1437114 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1209 05:50:48.566677 1437114 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1209 05:50:48.566701 1437114 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local docker daemon
	I1209 05:50:48.566709 1437114 cache.go:65] Caching tarball of preloaded images
	I1209 05:50:48.566793 1437114 preload.go:238] Found /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1209 05:50:48.566803 1437114 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1209 05:50:48.566914 1437114 profile.go:143] Saving config to /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/config.json ...
	I1209 05:50:48.585366 1437114 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local docker daemon, skipping pull
	I1209 05:50:48.585390 1437114 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c exists in daemon, skipping load
	I1209 05:50:48.585410 1437114 cache.go:243] Successfully downloaded all kic artifacts
	I1209 05:50:48.585447 1437114 start.go:360] acquireMachinesLock for newest-cni-262540: {Name:mk272d84ff1bc8c8949f2f0b1f608a7519899d10 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 05:50:48.585504 1437114 start.go:364] duration metric: took 35.806µs to acquireMachinesLock for "newest-cni-262540"
	I1209 05:50:48.585529 1437114 start.go:96] Skipping create...Using existing machine configuration
	I1209 05:50:48.585539 1437114 fix.go:54] fixHost starting: 
	I1209 05:50:48.585799 1437114 cli_runner.go:164] Run: docker container inspect newest-cni-262540 --format={{.State.Status}}
	I1209 05:50:48.601614 1437114 fix.go:112] recreateIfNeeded on newest-cni-262540: state=Stopped err=<nil>
	W1209 05:50:48.601645 1437114 fix.go:138] unexpected machine state, will restart: <nil>
	W1209 05:50:45.187180 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:50:47.684513 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	I1209 05:50:48.604910 1437114 out.go:252] * Restarting existing docker container for "newest-cni-262540" ...
	I1209 05:50:48.604997 1437114 cli_runner.go:164] Run: docker start newest-cni-262540
	I1209 05:50:48.871934 1437114 cli_runner.go:164] Run: docker container inspect newest-cni-262540 --format={{.State.Status}}
	I1209 05:50:48.896820 1437114 kic.go:430] container "newest-cni-262540" state is running.
	I1209 05:50:48.898586 1437114 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-262540
	I1209 05:50:48.919622 1437114 profile.go:143] Saving config to /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/config.json ...
	I1209 05:50:48.919952 1437114 machine.go:94] provisionDockerMachine start ...
	I1209 05:50:48.920090 1437114 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-262540
	I1209 05:50:48.944382 1437114 main.go:143] libmachine: Using SSH client type: native
	I1209 05:50:48.944721 1437114 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 34215 <nil> <nil>}
	I1209 05:50:48.944730 1437114 main.go:143] libmachine: About to run SSH command:
	hostname
	I1209 05:50:48.945423 1437114 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:54144->127.0.0.1:34215: read: connection reset by peer
	I1209 05:50:52.103931 1437114 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-262540
	
	I1209 05:50:52.103958 1437114 ubuntu.go:182] provisioning hostname "newest-cni-262540"
	I1209 05:50:52.104072 1437114 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-262540
	I1209 05:50:52.121462 1437114 main.go:143] libmachine: Using SSH client type: native
	I1209 05:50:52.121778 1437114 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 34215 <nil> <nil>}
	I1209 05:50:52.121795 1437114 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-262540 && echo "newest-cni-262540" | sudo tee /etc/hostname
	I1209 05:50:52.280621 1437114 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-262540
	
	I1209 05:50:52.280705 1437114 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-262540
	I1209 05:50:52.301681 1437114 main.go:143] libmachine: Using SSH client type: native
	I1209 05:50:52.301997 1437114 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 34215 <nil> <nil>}
	I1209 05:50:52.302019 1437114 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-262540' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-262540/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-262540' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 05:50:52.452274 1437114 main.go:143] libmachine: SSH cmd err, output: <nil>: 
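Editor's note: the script above is minikube's idempotent hostname fix-up. 127.0.1.1 is the conventional Debian/Ubuntu alias for the machine's own hostname, distinct from 127.0.0.1 localhost, so after provisioning the node's /etc/hosts should look roughly like this (illustrative sketch, not captured output):
	$ grep '^127\.0' /etc/hosts
	127.0.0.1 localhost
	127.0.1.1 newest-cni-262540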
	I1209 05:50:52.452304 1437114 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22081-1142328/.minikube CaCertPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22081-1142328/.minikube}
	I1209 05:50:52.452324 1437114 ubuntu.go:190] setting up certificates
	I1209 05:50:52.452332 1437114 provision.go:84] configureAuth start
	I1209 05:50:52.452391 1437114 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-262540
	I1209 05:50:52.475825 1437114 provision.go:143] copyHostCerts
	I1209 05:50:52.475907 1437114 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.pem, removing ...
	I1209 05:50:52.475921 1437114 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.pem
	I1209 05:50:52.475999 1437114 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.pem (1078 bytes)
	I1209 05:50:52.476136 1437114 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-1142328/.minikube/cert.pem, removing ...
	I1209 05:50:52.476147 1437114 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-1142328/.minikube/cert.pem
	I1209 05:50:52.476175 1437114 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22081-1142328/.minikube/cert.pem (1123 bytes)
	I1209 05:50:52.476288 1437114 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-1142328/.minikube/key.pem, removing ...
	I1209 05:50:52.476322 1437114 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-1142328/.minikube/key.pem
	I1209 05:50:52.476364 1437114 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22081-1142328/.minikube/key.pem (1675 bytes)
	I1209 05:50:52.476440 1437114 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca-key.pem org=jenkins.newest-cni-262540 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-262540]
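Editor's note: minikube generates this server certificate in Go, but the operation is roughly equivalent to the following openssl sketch; file names are hypothetical, and the SAN list matches the san=[...] logged above:
	# Sketch only: issue a server cert signed by the minikube CA with the logged SANs.
	openssl req -new -newkey rsa:2048 -nodes \
	  -keyout server-key.pem -out server.csr \
	  -subj "/O=jenkins.newest-cni-262540"
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
	  -out server.pem -days 365 \
	  -extfile <(printf "subjectAltName=IP:127.0.0.1,IP:192.168.76.2,DNS:localhost,DNS:minikube,DNS:newest-cni-262540")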
	I1209 05:50:52.561012 1437114 provision.go:177] copyRemoteCerts
	I1209 05:50:52.561084 1437114 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 05:50:52.561133 1437114 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-262540
	I1209 05:50:52.578674 1437114 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34215 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/newest-cni-262540/id_rsa Username:docker}
	I1209 05:50:52.685758 1437114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1209 05:50:52.702408 1437114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1209 05:50:52.719173 1437114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1671 bytes)
	I1209 05:50:52.736435 1437114 provision.go:87] duration metric: took 284.081054ms to configureAuth
	I1209 05:50:52.736462 1437114 ubuntu.go:206] setting minikube options for container-runtime
	I1209 05:50:52.736672 1437114 config.go:182] Loaded profile config "newest-cni-262540": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1209 05:50:52.736698 1437114 machine.go:97] duration metric: took 3.816733312s to provisionDockerMachine
	I1209 05:50:52.736707 1437114 start.go:293] postStartSetup for "newest-cni-262540" (driver="docker")
	I1209 05:50:52.736719 1437114 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 05:50:52.736771 1437114 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 05:50:52.736819 1437114 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-262540
	I1209 05:50:52.753733 1437114 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34215 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/newest-cni-262540/id_rsa Username:docker}
	I1209 05:50:52.859644 1437114 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 05:50:52.862806 1437114 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1209 05:50:52.862830 1437114 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1209 05:50:52.862841 1437114 filesync.go:126] Scanning /home/jenkins/minikube-integration/22081-1142328/.minikube/addons for local assets ...
	I1209 05:50:52.862893 1437114 filesync.go:126] Scanning /home/jenkins/minikube-integration/22081-1142328/.minikube/files for local assets ...
	I1209 05:50:52.862974 1437114 filesync.go:149] local asset: /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/ssl/certs/11442312.pem -> 11442312.pem in /etc/ssl/certs
	I1209 05:50:52.863076 1437114 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 05:50:52.870063 1437114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/ssl/certs/11442312.pem --> /etc/ssl/certs/11442312.pem (1708 bytes)
	I1209 05:50:52.886852 1437114 start.go:296] duration metric: took 150.129481ms for postStartSetup
	I1209 05:50:52.886932 1437114 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1209 05:50:52.887020 1437114 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-262540
	I1209 05:50:52.904086 1437114 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34215 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/newest-cni-262540/id_rsa Username:docker}
	I1209 05:50:53.006063 1437114 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1209 05:50:53.011716 1437114 fix.go:56] duration metric: took 4.426170276s for fixHost
	I1209 05:50:53.011745 1437114 start.go:83] releasing machines lock for "newest-cni-262540", held for 4.426228294s
	I1209 05:50:53.011812 1437114 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-262540
	I1209 05:50:53.028468 1437114 ssh_runner.go:195] Run: cat /version.json
	I1209 05:50:53.028532 1437114 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-262540
	I1209 05:50:53.028815 1437114 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 05:50:53.028886 1437114 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-262540
	I1209 05:50:53.050698 1437114 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34215 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/newest-cni-262540/id_rsa Username:docker}
	I1209 05:50:53.061651 1437114 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34215 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/newest-cni-262540/id_rsa Username:docker}
	I1209 05:50:53.151708 1437114 ssh_runner.go:195] Run: systemctl --version
	I1209 05:50:53.249572 1437114 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 05:50:53.254184 1437114 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 05:50:53.254256 1437114 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 05:50:53.261725 1437114 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1209 05:50:53.261749 1437114 start.go:496] detecting cgroup driver to use...
	I1209 05:50:53.261780 1437114 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1209 05:50:53.261828 1437114 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1209 05:50:53.278531 1437114 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1209 05:50:53.291190 1437114 docker.go:218] disabling cri-docker service (if available) ...
	I1209 05:50:53.291252 1437114 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 05:50:53.306525 1437114 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 05:50:53.319477 1437114 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 05:50:53.424347 1437114 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 05:50:53.539911 1437114 docker.go:234] disabling docker service ...
	I1209 05:50:53.540005 1437114 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 05:50:53.555506 1437114 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 05:50:53.568379 1437114 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 05:50:53.684143 1437114 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 05:50:53.819865 1437114 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 05:50:53.834400 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 05:50:53.848555 1437114 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1209 05:50:53.857346 1437114 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1209 05:50:53.866232 1437114 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1209 05:50:53.866362 1437114 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1209 05:50:53.875141 1437114 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1209 05:50:53.883775 1437114 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1209 05:50:53.892743 1437114 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1209 05:50:53.901606 1437114 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 05:50:53.909694 1437114 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1209 05:50:53.918469 1437114 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1209 05:50:53.927272 1437114 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1209 05:50:53.939275 1437114 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 05:50:53.948029 1437114 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 05:50:53.956257 1437114 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 05:50:54.075166 1437114 ssh_runner.go:195] Run: sudo systemctl restart containerd
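Editor's note: the sed sequence above rewrites /etc/containerd/config.toml in place so containerd matches the host's cgroupfs driver, the pinned pause image, and the expected CNI conf dir, then restarts the daemon. A quick way to confirm the edits took effect (sketch; expected values shown as comments):
	# Inside the node container: expect SystemdCgroup=false and the pinned pause image.
	sudo grep -E 'SystemdCgroup|sandbox_image' /etc/containerd/config.toml
	#   SystemdCgroup = false
	#   sandbox_image = "registry.k8s.io/pause:3.10.1"
	sudo systemctl restart containerd && systemctl is-active containerd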
	I1209 05:50:54.195479 1437114 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1209 05:50:54.195546 1437114 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1209 05:50:54.199412 1437114 start.go:564] Will wait 60s for crictl version
	I1209 05:50:54.199478 1437114 ssh_runner.go:195] Run: which crictl
	I1209 05:50:54.203349 1437114 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1209 05:50:54.229036 1437114 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1209 05:50:54.229147 1437114 ssh_runner.go:195] Run: containerd --version
	I1209 05:50:54.257755 1437114 ssh_runner.go:195] Run: containerd --version
	I1209 05:50:54.281890 1437114 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	W1209 05:50:50.184270 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:50:52.684275 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	I1209 05:50:54.284780 1437114 cli_runner.go:164] Run: docker network inspect newest-cni-262540 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1209 05:50:54.300458 1437114 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1209 05:50:54.304227 1437114 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
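Editor's note: the one-liner above is an idempotent hosts-file update: filter out any stale host.minikube.internal line, append the fresh mapping, and copy the temp file back with sudo (a plain shell redirect to /etc/hosts would run unprivileged and fail). The same pattern, spelled out as a sketch with placeholder variables:
	NAME=host.minikube.internal
	ENTRY="192.168.76.1	$NAME"
	{ grep -v $'\t'"$NAME"'$' /etc/hosts; echo "$ENTRY"; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts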
	I1209 05:50:54.316829 1437114 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1209 05:50:54.319602 1437114 kubeadm.go:884] updating cluster {Name:newest-cni-262540 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-262540 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 05:50:54.319761 1437114 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1209 05:50:54.319850 1437114 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 05:50:54.344882 1437114 containerd.go:627] all images are preloaded for containerd runtime.
	I1209 05:50:54.344907 1437114 containerd.go:534] Images already preloaded, skipping extraction
	I1209 05:50:54.344969 1437114 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 05:50:54.368351 1437114 containerd.go:627] all images are preloaded for containerd runtime.
	I1209 05:50:54.368375 1437114 cache_images.go:86] Images are preloaded, skipping loading
	I1209 05:50:54.368384 1437114 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1209 05:50:54.368487 1437114 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-262540 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-262540 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
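Editor's note: the empty ExecStart= line in the kubelet drop-in above is the standard systemd idiom for clearing the packaged unit's command list before substituting minikube's own invocation. The merged result can be inspected with (sketch):
	# Show the effective unit: the drop-in's blank ExecStart= resets the list,
	# then the second ExecStart= becomes the command line actually run.
	systemctl cat kubelet.service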
	I1209 05:50:54.368554 1437114 ssh_runner.go:195] Run: sudo crictl info
	I1209 05:50:54.396480 1437114 cni.go:84] Creating CNI manager for ""
	I1209 05:50:54.396505 1437114 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1209 05:50:54.396527 1437114 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1209 05:50:54.396551 1437114 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-262540 NodeName:newest-cni-262540 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1209 05:50:54.396668 1437114 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-262540"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
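Editor's note: the rendered file above packs InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration into one multi-document YAML, written to the node as kubeadm.yaml.new a few lines below. Recent kubeadm releases can sanity-check such a file before it is applied (sketch; availability depends on the kubeadm version):
	# Validate the generated config without applying it (kubeadm >= v1.26).
	sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new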
	
	I1209 05:50:54.396755 1437114 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1209 05:50:54.404357 1437114 binaries.go:51] Found k8s binaries, skipping transfer
	I1209 05:50:54.404462 1437114 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1209 05:50:54.411829 1437114 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1209 05:50:54.423915 1437114 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1209 05:50:54.436484 1437114 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
	I1209 05:50:54.448905 1437114 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1209 05:50:54.452398 1437114 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 05:50:54.461840 1437114 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 05:50:54.574379 1437114 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 05:50:54.590263 1437114 certs.go:69] Setting up /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540 for IP: 192.168.76.2
	I1209 05:50:54.590332 1437114 certs.go:195] generating shared ca certs ...
	I1209 05:50:54.590364 1437114 certs.go:227] acquiring lock for ca certs: {Name:mk15788702f8c4e23b5aeab3f44961d296fab259 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 05:50:54.590561 1437114 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.key
	I1209 05:50:54.590652 1437114 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/proxy-client-ca.key
	I1209 05:50:54.590688 1437114 certs.go:257] generating profile certs ...
	I1209 05:50:54.590838 1437114 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/client.key
	I1209 05:50:54.590942 1437114 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/apiserver.key.0ed49b31
	I1209 05:50:54.591051 1437114 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/proxy-client.key
	I1209 05:50:54.591210 1437114 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/1144231.pem (1338 bytes)
	W1209 05:50:54.591287 1437114 certs.go:480] ignoring /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/1144231_empty.pem, impossibly tiny 0 bytes
	I1209 05:50:54.591314 1437114 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca-key.pem (1679 bytes)
	I1209 05:50:54.591380 1437114 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem (1078 bytes)
	I1209 05:50:54.591442 1437114 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/cert.pem (1123 bytes)
	I1209 05:50:54.591490 1437114 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/key.pem (1675 bytes)
	I1209 05:50:54.591576 1437114 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/ssl/certs/11442312.pem (1708 bytes)
	I1209 05:50:54.592436 1437114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 05:50:54.617399 1437114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1209 05:50:54.636943 1437114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 05:50:54.658494 1437114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1209 05:50:54.674958 1437114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1209 05:50:54.701134 1437114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1209 05:50:54.720347 1437114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 05:50:54.738904 1437114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1209 05:50:54.758253 1437114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/1144231.pem --> /usr/share/ca-certificates/1144231.pem (1338 bytes)
	I1209 05:50:54.775204 1437114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/ssl/certs/11442312.pem --> /usr/share/ca-certificates/11442312.pem (1708 bytes)
	I1209 05:50:54.791963 1437114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 05:50:54.809403 1437114 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 05:50:54.821958 1437114 ssh_runner.go:195] Run: openssl version
	I1209 05:50:54.828113 1437114 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1209 05:50:54.835305 1437114 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1209 05:50:54.842458 1437114 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 05:50:54.846155 1437114 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 04:09 /usr/share/ca-certificates/minikubeCA.pem
	I1209 05:50:54.846222 1437114 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 05:50:54.887330 1437114 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1209 05:50:54.894630 1437114 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1144231.pem
	I1209 05:50:54.901722 1437114 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1144231.pem /etc/ssl/certs/1144231.pem
	I1209 05:50:54.909025 1437114 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1144231.pem
	I1209 05:50:54.912514 1437114 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 04:18 /usr/share/ca-certificates/1144231.pem
	I1209 05:50:54.912621 1437114 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1144231.pem
	I1209 05:50:54.953649 1437114 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1209 05:50:54.960781 1437114 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/11442312.pem
	I1209 05:50:54.967822 1437114 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/11442312.pem /etc/ssl/certs/11442312.pem
	I1209 05:50:54.975177 1437114 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11442312.pem
	I1209 05:50:54.978699 1437114 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 04:18 /usr/share/ca-certificates/11442312.pem
	I1209 05:50:54.978782 1437114 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11442312.pem
	I1209 05:50:55.020640 1437114 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
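Editor's note: the openssl x509 -hash calls above explain the opaque names being tested (b5213941.0, 51391683.0, 3ec20f2e.0): OpenSSL locates CA certificates in /etc/ssl/certs via subject-hash symlinks named <hash>.0. A sketch of how such a link is produced for one of the certs from the log:
	# Link a CA cert under its subject hash so OpenSSL's default lookup finds it.
	CERT=/usr/share/ca-certificates/minikubeCA.pem
	sudo ln -fs "$CERT" /etc/ssl/certs/$(openssl x509 -hash -noout -in "$CERT").0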
	I1209 05:50:55.034989 1437114 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 05:50:55.043885 1437114 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1209 05:50:55.090059 1437114 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1209 05:50:55.134954 1437114 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1209 05:50:55.180095 1437114 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1209 05:50:55.223090 1437114 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1209 05:50:55.265103 1437114 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
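Editor's note: the -checkend 86400 runs above are minikube's expiry probe: openssl exits non-zero if the certificate expires within the given number of seconds (here, 24 hours), which is what would trigger regeneration. For example (sketch, using one of the certs checked above):
	# Exit status 0 if the cert is still valid a day from now, 1 otherwise.
	openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	  && echo "ok for 24h" || echo "expiring soon"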
	I1209 05:50:55.306238 1437114 kubeadm.go:401] StartCluster: {Name:newest-cni-262540 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-262540 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 05:50:55.306348 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1209 05:50:55.306413 1437114 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 05:50:55.335032 1437114 cri.go:89] found id: ""
	I1209 05:50:55.335115 1437114 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1209 05:50:55.355619 1437114 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1209 05:50:55.355640 1437114 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1209 05:50:55.355691 1437114 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1209 05:50:55.363844 1437114 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1209 05:50:55.364433 1437114 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-262540" does not appear in /home/jenkins/minikube-integration/22081-1142328/kubeconfig
	I1209 05:50:55.364754 1437114 kubeconfig.go:62] /home/jenkins/minikube-integration/22081-1142328/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-262540" cluster setting kubeconfig missing "newest-cni-262540" context setting]
	I1209 05:50:55.365251 1437114 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1142328/kubeconfig: {Name:mk0dc127429f88dc4fdfb2d110deebc58207c1b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 05:50:55.366765 1437114 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1209 05:50:55.375221 1437114 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1209 05:50:55.375252 1437114 kubeadm.go:602] duration metric: took 19.605753ms to restartPrimaryControlPlane
	I1209 05:50:55.375261 1437114 kubeadm.go:403] duration metric: took 69.033781ms to StartCluster
	I1209 05:50:55.375276 1437114 settings.go:142] acquiring lock: {Name:mk8fa744e3d74bf8a1cbf5ac275c9f1969ad91a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 05:50:55.375345 1437114 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22081-1142328/kubeconfig
	I1209 05:50:55.376265 1437114 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1142328/kubeconfig: {Name:mk0dc127429f88dc4fdfb2d110deebc58207c1b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 05:50:55.376705 1437114 config.go:182] Loaded profile config "newest-cni-262540": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1209 05:50:55.376504 1437114 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1209 05:50:55.376810 1437114 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1209 05:50:55.377093 1437114 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-262540"
	I1209 05:50:55.377111 1437114 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-262540"
	I1209 05:50:55.377136 1437114 host.go:66] Checking if "newest-cni-262540" exists ...
	I1209 05:50:55.377594 1437114 cli_runner.go:164] Run: docker container inspect newest-cni-262540 --format={{.State.Status}}
	I1209 05:50:55.377785 1437114 addons.go:70] Setting dashboard=true in profile "newest-cni-262540"
	I1209 05:50:55.377813 1437114 addons.go:239] Setting addon dashboard=true in "newest-cni-262540"
	W1209 05:50:55.377825 1437114 addons.go:248] addon dashboard should already be in state true
	I1209 05:50:55.377849 1437114 host.go:66] Checking if "newest-cni-262540" exists ...
	I1209 05:50:55.378304 1437114 cli_runner.go:164] Run: docker container inspect newest-cni-262540 --format={{.State.Status}}
	I1209 05:50:55.378820 1437114 addons.go:70] Setting default-storageclass=true in profile "newest-cni-262540"
	I1209 05:50:55.378864 1437114 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-262540"
	I1209 05:50:55.379212 1437114 cli_runner.go:164] Run: docker container inspect newest-cni-262540 --format={{.State.Status}}
	I1209 05:50:55.381896 1437114 out.go:179] * Verifying Kubernetes components...
	I1209 05:50:55.388614 1437114 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 05:50:55.438264 1437114 addons.go:239] Setting addon default-storageclass=true in "newest-cni-262540"
	I1209 05:50:55.438303 1437114 host.go:66] Checking if "newest-cni-262540" exists ...
	I1209 05:50:55.438728 1437114 cli_runner.go:164] Run: docker container inspect newest-cni-262540 --format={{.State.Status}}
	I1209 05:50:55.440785 1437114 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 05:50:55.442715 1437114 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 05:50:55.442743 1437114 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1209 05:50:55.442806 1437114 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-262540
	I1209 05:50:55.442947 1437114 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1209 05:50:55.445621 1437114 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1209 05:50:55.449877 1437114 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1209 05:50:55.449904 1437114 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1209 05:50:55.449976 1437114 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-262540
	I1209 05:50:55.481759 1437114 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34215 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/newest-cni-262540/id_rsa Username:docker}
	I1209 05:50:55.496417 1437114 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1209 05:50:55.496440 1437114 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1209 05:50:55.496499 1437114 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-262540
	I1209 05:50:55.515362 1437114 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34215 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/newest-cni-262540/id_rsa Username:docker}
	I1209 05:50:55.537402 1437114 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34215 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/newest-cni-262540/id_rsa Username:docker}
	I1209 05:50:55.642792 1437114 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 05:50:55.677774 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 05:50:55.711653 1437114 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1209 05:50:55.711691 1437114 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1209 05:50:55.713691 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1209 05:50:55.771340 1437114 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1209 05:50:55.771368 1437114 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1209 05:50:55.785331 1437114 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1209 05:50:55.785403 1437114 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1209 05:50:55.798961 1437114 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1209 05:50:55.798984 1437114 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1209 05:50:55.811558 1437114 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1209 05:50:55.811625 1437114 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1209 05:50:55.824010 1437114 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1209 05:50:55.824113 1437114 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1209 05:50:55.836722 1437114 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1209 05:50:55.836745 1437114 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1209 05:50:55.849061 1437114 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1209 05:50:55.849126 1437114 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1209 05:50:55.862091 1437114 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1209 05:50:55.862114 1437114 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1209 05:50:55.875010 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1209 05:50:56.435552 1437114 api_server.go:52] waiting for apiserver process to appear ...
	W1209 05:50:56.435748 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:50:56.435801 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:50:56.435838 1437114 retry.go:31] will retry after 228.095144ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 05:50:56.435700 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:50:56.435898 1437114 retry.go:31] will retry after 361.053359ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 05:50:56.436142 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:50:56.436189 1437114 retry.go:31] will retry after 212.683869ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:50:56.649580 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1209 05:50:56.665010 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1209 05:50:56.729564 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:50:56.729662 1437114 retry.go:31] will retry after 263.201205ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 05:50:56.751560 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:50:56.751590 1437114 retry.go:31] will retry after 282.08987ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:50:56.797828 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1209 05:50:56.855489 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:50:56.855525 1437114 retry.go:31] will retry after 519.882573ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:50:56.936655 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:50:56.993111 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1209 05:50:57.034512 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1209 05:50:57.059780 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:50:57.059861 1437114 retry.go:31] will retry after 724.517068ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 05:50:57.095702 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:50:57.095733 1437114 retry.go:31] will retry after 773.591416ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:50:57.376312 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1209 05:50:57.435557 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:50:57.435589 1437114 retry.go:31] will retry after 453.196958ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:50:57.436773 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:50:57.784620 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1209 05:50:57.844755 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:50:57.844791 1437114 retry.go:31] will retry after 1.262011023s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:50:57.869923 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 05:50:57.889536 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 05:50:57.936212 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1209 05:50:57.961431 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:50:57.961468 1437114 retry.go:31] will retry after 546.501311ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 05:50:58.032466 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:50:58.032501 1437114 retry.go:31] will retry after 1.229436669s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 05:50:54.684397 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:50:57.184110 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:50:59.184561 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	I1209 05:50:58.436310 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:50:58.508935 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1209 05:50:58.565163 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:50:58.565196 1437114 retry.go:31] will retry after 1.407912766s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:50:58.936676 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:50:59.107417 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1209 05:50:59.166291 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:50:59.166364 1437114 retry.go:31] will retry after 928.374807ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:50:59.262572 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1209 05:50:59.321942 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:50:59.321975 1437114 retry.go:31] will retry after 837.961471ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:50:59.436172 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:50:59.936839 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:50:59.973278 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 05:51:00.094961 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1209 05:51:00.122388 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:00.122508 1437114 retry.go:31] will retry after 2.37581771s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:00.163516 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1209 05:51:00.369038 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:00.369122 1437114 retry.go:31] will retry after 1.02409357s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 05:51:00.430845 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:00.430881 1437114 retry.go:31] will retry after 1.008529781s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:00.435975 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:00.935928 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:01.393811 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1209 05:51:01.436520 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:01.440060 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1209 05:51:01.479948 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:01.480008 1437114 retry.go:31] will retry after 3.887040249s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 05:51:01.521362 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:01.521394 1437114 retry.go:31] will retry after 2.488257731s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:01.936891 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:02.436059 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:02.499505 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1209 05:51:02.558807 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:02.558839 1437114 retry.go:31] will retry after 1.68559081s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:02.936227 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1209 05:51:01.683581 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:51:04.183570 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	I1209 05:51:03.436252 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:03.936492 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:04.009914 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1209 05:51:04.068567 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:04.068604 1437114 retry.go:31] will retry after 3.558332748s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:04.244680 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1209 05:51:04.309239 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:04.309330 1437114 retry.go:31] will retry after 5.213787505s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:04.436559 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:04.936651 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:05.367810 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1209 05:51:05.433548 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:05.433586 1437114 retry.go:31] will retry after 5.477878375s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:05.436872 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:05.936073 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:06.436593 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:06.936543 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:07.436871 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:07.628150 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1209 05:51:07.690629 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:07.690661 1437114 retry.go:31] will retry after 6.157660473s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:07.935908 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1209 05:51:06.183630 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:51:08.683544 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	I1209 05:51:08.436122 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:08.935959 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:09.436970 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:09.523671 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1209 05:51:09.581839 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:09.581914 1437114 retry.go:31] will retry after 9.601279523s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:09.936233 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:10.436178 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:10.911744 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1209 05:51:10.936618 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1209 05:51:11.040149 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:11.040187 1437114 retry.go:31] will retry after 9.211684326s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:11.436896 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:11.936862 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:12.435946 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:12.936781 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1209 05:51:10.683655 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:51:12.684274 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	I1209 05:51:13.436827 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:13.848647 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1209 05:51:13.909374 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:13.909406 1437114 retry.go:31] will retry after 5.044533036s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:13.936521 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:14.436557 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:14.935977 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:15.436310 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:15.936335 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:16.436628 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:16.936535 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:17.436311 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:17.935962 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1209 05:51:15.183508 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:51:17.183575 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:51:19.184498 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	I1209 05:51:18.435898 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:18.936142 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:18.955073 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1209 05:51:19.020072 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:19.020104 1437114 retry.go:31] will retry after 11.951102235s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:19.184688 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1209 05:51:19.284505 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:19.284538 1437114 retry.go:31] will retry after 12.030085055s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:19.435928 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:19.936763 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:20.252740 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1209 05:51:20.316752 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:20.316784 1437114 retry.go:31] will retry after 7.019613017s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:20.436227 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:20.936875 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:21.435907 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:21.935963 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:22.436158 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:22.936474 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1209 05:51:21.683564 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:51:23.683626 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:51:26.184579 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	I1209 05:51:27.683214 1429857 node_ready.go:38] duration metric: took 6m0.000146062s for node "no-preload-842269" to be "Ready" ...
	I1209 05:51:27.686512 1429857 out.go:203] 
	W1209 05:51:27.689522 1429857 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1209 05:51:27.689540 1429857 out.go:285] * 
	W1209 05:51:27.691657 1429857 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 05:51:27.694499 1429857 out.go:203] 
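The 1429857 process above gives up after exactly six minutes: node_ready.go polls the node's Ready condition until its context deadline expires, and WaitNodeCondition surfaces that as the GUEST_START failure. A client-go sketch of the same wait loop; the 2s poll interval and the kubeconfig handling are assumptions, not minikube's actual implementation:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // waitNodeReady polls a node's Ready condition until it is True or the
    // context deadline passes, the same shape as the loop that produced the
    // "took 6m0s ... context deadline exceeded" lines above.
    func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) error {
    	for {
    		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
    		if err == nil {
    			for _, c := range node.Status.Conditions {
    				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
    					return nil
    				}
    			}
    		}
    		// On error (e.g. connection refused) or NotReady, wait and retry.
    		select {
    		case <-ctx.Done():
    			return ctx.Err() // context deadline exceeded
    		case <-time.After(2 * time.Second):
    		}
    	}
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
    	defer cancel()
    	fmt.Println(waitNodeReady(ctx, cs, "no-preload-842269"))
    }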
	I1209 05:51:23.436353 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:23.936003 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:24.435917 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:24.936039 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:25.435883 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:25.936680 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:26.436359 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:26.936582 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:27.336866 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1209 05:51:27.401213 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:27.401248 1437114 retry.go:31] will retry after 15.185111317s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1209 05:51:27.436540 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:27.936409 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:28.436146 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:28.936943 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:29.435893 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:29.936169 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:30.435922 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:30.936805 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:30.972257 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1209 05:51:31.030985 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:31.031019 1437114 retry.go:31] will retry after 20.454574576s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I1209 05:51:31.315422 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1209 05:51:31.375282 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:31.375315 1437114 retry.go:31] will retry after 20.731698158s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	I1209 05:51:31.436402 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:31.936683 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:32.436139 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:32.936168 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:33.436458 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:33.936647 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:34.435986 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:34.935949 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:35.436254 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:35.936501 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:36.436171 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:36.936413 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:37.436503 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:37.936112 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:38.436260 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:38.936155 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:39.435919 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:39.935963 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:40.435931 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:40.936251 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:41.435937 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:41.936193 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:42.436356 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
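Between apply attempts, minikube polls for a running apiserver process roughly every 500ms with `sudo pgrep -xnf kube-apiserver.*minikube.*`; every apply in this stretch fails because that probe never succeeds. A sketch of the wait loop, again with local exec standing in for ssh_runner and a hypothetical two-minute deadline:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // apiserverRunning reports whether a kube-apiserver process for this
    // profile exists; pgrep exits non-zero when nothing matches.
    func apiserverRunning() bool {
        return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
    }

    func main() {
        deadline := time.Now().Add(2 * time.Minute) // deadline is an assumption
        for time.Now().Before(deadline) {
            if apiserverRunning() {
                fmt.Println("kube-apiserver is up")
                return
            }
            time.Sleep(500 * time.Millisecond) // matches the ~0.5s cadence in the log
        }
        fmt.Println("timed out waiting for kube-apiserver")
    }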
	I1209 05:51:42.587277 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1209 05:51:42.649100 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:42.649137 1437114 retry.go:31] will retry after 20.728553891s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1209 05:51:42.936771 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:43.435958 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:43.936674 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:44.436708 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:44.936177 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:45.436620 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:45.936616 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:46.436000 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:46.936141 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:47.435976 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:47.936139 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:48.436162 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:48.936736 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:49.436154 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:49.936192 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:50.436517 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:50.936806 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:51.436499 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:51.485950 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1209 05:51:51.548585 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:51.548614 1437114 retry.go:31] will retry after 47.596790172s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I1209 05:51:51.936087 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:52.108051 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1209 05:51:52.167486 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:52.167519 1437114 retry.go:31] will retry after 29.777424896s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	I1209 05:51:52.436906 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:52.936203 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:53.436751 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:53.936576 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:54.436593 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:54.935988 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:55.436246 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:51:55.436382 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:51:55.467996 1437114 cri.go:89] found id: ""
	I1209 05:51:55.468084 1437114 logs.go:282] 0 containers: []
	W1209 05:51:55.468107 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:51:55.468125 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:51:55.468223 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:51:55.504401 1437114 cri.go:89] found id: ""
	I1209 05:51:55.504427 1437114 logs.go:282] 0 containers: []
	W1209 05:51:55.504434 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:51:55.504440 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:51:55.504513 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:51:55.530581 1437114 cri.go:89] found id: ""
	I1209 05:51:55.530606 1437114 logs.go:282] 0 containers: []
	W1209 05:51:55.530615 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:51:55.530621 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:51:55.530689 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:51:55.555637 1437114 cri.go:89] found id: ""
	I1209 05:51:55.555708 1437114 logs.go:282] 0 containers: []
	W1209 05:51:55.555744 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:51:55.555768 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:51:55.555867 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:51:55.582108 1437114 cri.go:89] found id: ""
	I1209 05:51:55.582132 1437114 logs.go:282] 0 containers: []
	W1209 05:51:55.582141 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:51:55.582148 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:51:55.582242 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:51:55.606067 1437114 cri.go:89] found id: ""
	I1209 05:51:55.606092 1437114 logs.go:282] 0 containers: []
	W1209 05:51:55.606101 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:51:55.606119 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:51:55.606179 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:51:55.632387 1437114 cri.go:89] found id: ""
	I1209 05:51:55.632413 1437114 logs.go:282] 0 containers: []
	W1209 05:51:55.632422 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:51:55.632428 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:51:55.632489 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:51:55.657181 1437114 cri.go:89] found id: ""
	I1209 05:51:55.657207 1437114 logs.go:282] 0 containers: []
	W1209 05:51:55.657215 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
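Once the wait gives up, the log-collection path inventories the node by querying the CRI for each expected component (cri.go:54): one `crictl ps -a --quiet --name=<component>` per name, and an empty ID list means the container was never created. A minimal equivalent of that loop, assuming crictl is installed on the node:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listContainers returns the IDs of all containers in any state whose
    // name matches the component, as `crictl ps -a --quiet` prints them.
    func listContainers(name string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
        }
        for _, c := range components {
            ids, err := listContainers(c)
            if err != nil || len(ids) == 0 {
                fmt.Printf("no container found matching %q\n", c)
                continue
            }
            fmt.Printf("%s: %v\n", c, ids)
        }
    }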
	I1209 05:51:55.657224 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:51:55.657236 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:51:55.718829 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:51:55.710893    1841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:51:55.711561    1841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:51:55.713071    1841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:51:55.713520    1841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:51:55.714997    1841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
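Every kubectl invocation here fails the same way: dial tcp [::1]:8443: connect: connection refused. A refused connect means nothing is listening on the apiserver port at all, which is a different failure mode from a TLS or authentication error (listener up, handshake failing) or a timeout (traffic filtered). That distinction can be checked with a bare TCP dial; a small sketch against the localhost:8443 endpoint the kubeconfig points at:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // "connect: connection refused" => no listener on the port;
        // contrast with a dial timeout or a TLS handshake error.
        conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
        if err != nil {
            fmt.Printf("apiserver port closed: %v\n", err)
            return
        }
        conn.Close()
        fmt.Println("something is listening on localhost:8443")
    }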
	I1209 05:51:55.718849 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:51:55.718861 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:51:55.745044 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:51:55.745076 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:51:55.779273 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:51:55.779300 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:51:55.836724 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:51:55.836759 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
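With no containers to inspect and the apiserver unreachable, the remaining diagnostics come from host-side sources: the kubelet and containerd journals, recent kernel warnings, and a raw container listing. A sketch of that collection step, wrapping the same shell commands the log shows in bash -c:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // gather runs one collector via `bash -c`, as the log does; a failing
    // collector is reported but does not stop the remaining ones.
    func gather(label, cmd string) {
        out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
        if err != nil {
            fmt.Printf("== %s (error: %v) ==\n%s\n", label, err, out)
            return
        }
        fmt.Printf("== %s ==\n%s\n", label, out)
    }

    func main() {
        gather("kubelet", "sudo journalctl -u kubelet -n 400")
        gather("containerd", "sudo journalctl -u containerd -n 400")
        gather("dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
        gather("container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
    }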
	I1209 05:51:58.354526 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:58.364806 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:51:58.364873 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:51:58.394168 1437114 cri.go:89] found id: ""
	I1209 05:51:58.394193 1437114 logs.go:282] 0 containers: []
	W1209 05:51:58.394201 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:51:58.394213 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:51:58.394269 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:51:58.419742 1437114 cri.go:89] found id: ""
	I1209 05:51:58.419776 1437114 logs.go:282] 0 containers: []
	W1209 05:51:58.419785 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:51:58.419792 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:51:58.419859 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:51:58.464612 1437114 cri.go:89] found id: ""
	I1209 05:51:58.464637 1437114 logs.go:282] 0 containers: []
	W1209 05:51:58.464646 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:51:58.464652 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:51:58.464707 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:51:58.496121 1437114 cri.go:89] found id: ""
	I1209 05:51:58.496148 1437114 logs.go:282] 0 containers: []
	W1209 05:51:58.496157 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:51:58.496163 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:51:58.496259 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:51:58.520390 1437114 cri.go:89] found id: ""
	I1209 05:51:58.520429 1437114 logs.go:282] 0 containers: []
	W1209 05:51:58.520439 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:51:58.520452 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:51:58.520531 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:51:58.546795 1437114 cri.go:89] found id: ""
	I1209 05:51:58.546828 1437114 logs.go:282] 0 containers: []
	W1209 05:51:58.546838 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:51:58.546847 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:51:58.546911 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:51:58.570252 1437114 cri.go:89] found id: ""
	I1209 05:51:58.570279 1437114 logs.go:282] 0 containers: []
	W1209 05:51:58.570289 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:51:58.570295 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:51:58.570359 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:51:58.594153 1437114 cri.go:89] found id: ""
	I1209 05:51:58.594178 1437114 logs.go:282] 0 containers: []
	W1209 05:51:58.594187 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:51:58.594195 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:51:58.594207 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:51:58.621218 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:51:58.621244 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:51:58.675840 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:51:58.675877 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:51:58.691699 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:51:58.691734 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:51:58.755150 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:51:58.747260    1968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:51:58.747839    1968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:51:58.749288    1968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:51:58.749743    1968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:51:58.751180    1968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:51:58.755171 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:51:58.755185 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:52:01.281475 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:52:01.293255 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:52:01.293329 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:52:01.318701 1437114 cri.go:89] found id: ""
	I1209 05:52:01.318740 1437114 logs.go:282] 0 containers: []
	W1209 05:52:01.318749 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:52:01.318757 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:52:01.318827 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:52:01.343120 1437114 cri.go:89] found id: ""
	I1209 05:52:01.343145 1437114 logs.go:282] 0 containers: []
	W1209 05:52:01.343154 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:52:01.343170 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:52:01.343228 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:52:01.367699 1437114 cri.go:89] found id: ""
	I1209 05:52:01.367725 1437114 logs.go:282] 0 containers: []
	W1209 05:52:01.367733 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:52:01.367749 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:52:01.367823 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:52:01.394578 1437114 cri.go:89] found id: ""
	I1209 05:52:01.394603 1437114 logs.go:282] 0 containers: []
	W1209 05:52:01.394612 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:52:01.394618 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:52:01.394677 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:52:01.423264 1437114 cri.go:89] found id: ""
	I1209 05:52:01.423290 1437114 logs.go:282] 0 containers: []
	W1209 05:52:01.423299 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:52:01.423305 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:52:01.423367 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:52:01.460737 1437114 cri.go:89] found id: ""
	I1209 05:52:01.460764 1437114 logs.go:282] 0 containers: []
	W1209 05:52:01.460772 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:52:01.460778 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:52:01.460850 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:52:01.493246 1437114 cri.go:89] found id: ""
	I1209 05:52:01.493272 1437114 logs.go:282] 0 containers: []
	W1209 05:52:01.493281 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:52:01.493287 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:52:01.493364 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:52:01.517585 1437114 cri.go:89] found id: ""
	I1209 05:52:01.517612 1437114 logs.go:282] 0 containers: []
	W1209 05:52:01.517620 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:52:01.517630 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:52:01.517670 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:52:01.579907 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:52:01.571951    2063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:01.572467    2063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:01.574150    2063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:01.574485    2063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:01.575978    2063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:52:01.579934 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:52:01.579951 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:52:01.605933 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:52:01.605968 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:52:01.633450 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:52:01.633476 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:52:01.690768 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:52:01.690809 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:52:03.378312 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1209 05:52:03.443761 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:52:03.443892 1437114 retry.go:31] will retry after 46.030372913s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1209 05:52:04.208154 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:52:04.218947 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:52:04.219023 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:52:04.250185 1437114 cri.go:89] found id: ""
	I1209 05:52:04.250210 1437114 logs.go:282] 0 containers: []
	W1209 05:52:04.250219 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:52:04.250226 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:52:04.250336 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:52:04.278437 1437114 cri.go:89] found id: ""
	I1209 05:52:04.278462 1437114 logs.go:282] 0 containers: []
	W1209 05:52:04.278471 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:52:04.278477 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:52:04.278540 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:52:04.306148 1437114 cri.go:89] found id: ""
	I1209 05:52:04.306212 1437114 logs.go:282] 0 containers: []
	W1209 05:52:04.306227 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:52:04.306235 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:52:04.306294 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:52:04.330968 1437114 cri.go:89] found id: ""
	I1209 05:52:04.330995 1437114 logs.go:282] 0 containers: []
	W1209 05:52:04.331003 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:52:04.331014 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:52:04.331074 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:52:04.361139 1437114 cri.go:89] found id: ""
	I1209 05:52:04.361213 1437114 logs.go:282] 0 containers: []
	W1209 05:52:04.361228 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:52:04.361235 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:52:04.361292 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:52:04.384663 1437114 cri.go:89] found id: ""
	I1209 05:52:04.384728 1437114 logs.go:282] 0 containers: []
	W1209 05:52:04.384744 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:52:04.384751 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:52:04.384819 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:52:04.409163 1437114 cri.go:89] found id: ""
	I1209 05:52:04.409188 1437114 logs.go:282] 0 containers: []
	W1209 05:52:04.409196 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:52:04.409202 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:52:04.409260 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:52:04.438875 1437114 cri.go:89] found id: ""
	I1209 05:52:04.438901 1437114 logs.go:282] 0 containers: []
	W1209 05:52:04.438911 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
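The sweep above is minikube's standard diagnostic pass: for each expected component (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, kubernetes-dashboard) it lists matching CRI containers. Every query returns an empty ID list, and because the listing uses -a (all states, including exited), this means containerd never created the containers at all; a crash loop would instead show exited container IDs. The check can be reproduced per component on the node, for example:

    sudo crictl ps -a --quiet --name=kube-apiserver

With nothing found, the run falls through to gathering kubelet, dmesg, describe-nodes, containerd, and container-status output, as seen below.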
	I1209 05:52:04.438920 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:52:04.438930 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:52:04.504081 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:52:04.504118 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:52:04.520282 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:52:04.520314 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:52:04.582173 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:52:04.574497    2189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:04.575080    2189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:04.576516    2189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:04.576898    2189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:04.578287    2189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
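The "describe nodes" gather fails the same way: the five memcache.go errors are kubectl's API-group discovery attempts, all refused at TCP connect, so the failure occurs before TLS or authentication are ever reached. The same result should be reproducible on the node with any kubectl verb, e.g.:

    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get nodes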
	I1209 05:52:04.582197 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:52:04.582209 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:52:04.607423 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:52:04.607456 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:52:07.139347 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:52:07.149801 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:52:07.149872 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:52:07.174952 1437114 cri.go:89] found id: ""
	I1209 05:52:07.174980 1437114 logs.go:282] 0 containers: []
	W1209 05:52:07.174988 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:52:07.174995 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:52:07.175054 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:52:07.202325 1437114 cri.go:89] found id: ""
	I1209 05:52:07.202387 1437114 logs.go:282] 0 containers: []
	W1209 05:52:07.202418 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:52:07.202437 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:52:07.202533 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:52:07.232008 1437114 cri.go:89] found id: ""
	I1209 05:52:07.232092 1437114 logs.go:282] 0 containers: []
	W1209 05:52:07.232147 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:52:07.232170 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:52:07.232265 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:52:07.259048 1437114 cri.go:89] found id: ""
	I1209 05:52:07.259075 1437114 logs.go:282] 0 containers: []
	W1209 05:52:07.259084 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:52:07.259091 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:52:07.259147 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:52:07.283135 1437114 cri.go:89] found id: ""
	I1209 05:52:07.283161 1437114 logs.go:282] 0 containers: []
	W1209 05:52:07.283169 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:52:07.283175 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:52:07.283285 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:52:07.307259 1437114 cri.go:89] found id: ""
	I1209 05:52:07.307285 1437114 logs.go:282] 0 containers: []
	W1209 05:52:07.307294 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:52:07.307300 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:52:07.307357 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:52:07.331534 1437114 cri.go:89] found id: ""
	I1209 05:52:07.331604 1437114 logs.go:282] 0 containers: []
	W1209 05:52:07.331627 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:52:07.331645 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:52:07.331742 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:52:07.358525 1437114 cri.go:89] found id: ""
	I1209 05:52:07.358548 1437114 logs.go:282] 0 containers: []
	W1209 05:52:07.358557 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:52:07.358565 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:52:07.358577 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:52:07.424932 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:52:07.417064    2295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:07.417623    2295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:07.419222    2295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:07.419698    2295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:07.421122    2295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:52:07.425003 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:52:07.425028 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:52:07.452549 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:52:07.452633 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:52:07.488600 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:52:07.488675 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:52:07.547568 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:52:07.547604 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:52:10.063961 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:52:10.075421 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:52:10.075510 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:52:10.106279 1437114 cri.go:89] found id: ""
	I1209 05:52:10.106307 1437114 logs.go:282] 0 containers: []
	W1209 05:52:10.106317 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:52:10.106323 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:52:10.106395 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:52:10.140825 1437114 cri.go:89] found id: ""
	I1209 05:52:10.140865 1437114 logs.go:282] 0 containers: []
	W1209 05:52:10.140874 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:52:10.140881 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:52:10.140961 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:52:10.166337 1437114 cri.go:89] found id: ""
	I1209 05:52:10.166364 1437114 logs.go:282] 0 containers: []
	W1209 05:52:10.166373 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:52:10.166380 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:52:10.166460 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:52:10.202390 1437114 cri.go:89] found id: ""
	I1209 05:52:10.202417 1437114 logs.go:282] 0 containers: []
	W1209 05:52:10.202426 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:52:10.202432 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:52:10.202541 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:52:10.230690 1437114 cri.go:89] found id: ""
	I1209 05:52:10.230716 1437114 logs.go:282] 0 containers: []
	W1209 05:52:10.230726 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:52:10.230733 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:52:10.230847 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:52:10.257345 1437114 cri.go:89] found id: ""
	I1209 05:52:10.257371 1437114 logs.go:282] 0 containers: []
	W1209 05:52:10.257380 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:52:10.257386 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:52:10.257452 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:52:10.282028 1437114 cri.go:89] found id: ""
	I1209 05:52:10.282053 1437114 logs.go:282] 0 containers: []
	W1209 05:52:10.282062 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:52:10.282069 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:52:10.282136 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:52:10.306484 1437114 cri.go:89] found id: ""
	I1209 05:52:10.306509 1437114 logs.go:282] 0 containers: []
	W1209 05:52:10.306519 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:52:10.306538 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:52:10.306550 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:52:10.334032 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:52:10.334059 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:52:10.396200 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:52:10.396241 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:52:10.412481 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:52:10.412513 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:52:10.512214 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:52:10.503459    2422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:10.504106    2422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:10.505795    2422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:10.506184    2422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:10.507800    2422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:52:10.512237 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:52:10.512250 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:52:13.038285 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:52:13.048783 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:52:13.048856 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:52:13.073147 1437114 cri.go:89] found id: ""
	I1209 05:52:13.073174 1437114 logs.go:282] 0 containers: []
	W1209 05:52:13.073182 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:52:13.073189 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:52:13.073264 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:52:13.096887 1437114 cri.go:89] found id: ""
	I1209 05:52:13.096911 1437114 logs.go:282] 0 containers: []
	W1209 05:52:13.096919 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:52:13.096926 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:52:13.096983 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:52:13.120441 1437114 cri.go:89] found id: ""
	I1209 05:52:13.120466 1437114 logs.go:282] 0 containers: []
	W1209 05:52:13.120475 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:52:13.120482 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:52:13.120540 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:52:13.144403 1437114 cri.go:89] found id: ""
	I1209 05:52:13.144478 1437114 logs.go:282] 0 containers: []
	W1209 05:52:13.144494 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:52:13.144504 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:52:13.144576 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:52:13.168584 1437114 cri.go:89] found id: ""
	I1209 05:52:13.168610 1437114 logs.go:282] 0 containers: []
	W1209 05:52:13.168619 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:52:13.168626 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:52:13.168683 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:52:13.204797 1437114 cri.go:89] found id: ""
	I1209 05:52:13.204824 1437114 logs.go:282] 0 containers: []
	W1209 05:52:13.204833 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:52:13.204840 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:52:13.204899 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:52:13.231178 1437114 cri.go:89] found id: ""
	I1209 05:52:13.231205 1437114 logs.go:282] 0 containers: []
	W1209 05:52:13.231214 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:52:13.231220 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:52:13.231278 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:52:13.260307 1437114 cri.go:89] found id: ""
	I1209 05:52:13.260331 1437114 logs.go:282] 0 containers: []
	W1209 05:52:13.260341 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:52:13.260350 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:52:13.260361 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:52:13.286145 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:52:13.286182 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:52:13.315119 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:52:13.315147 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:52:13.369862 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:52:13.369894 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:52:13.385795 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:52:13.385822 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:52:13.451305 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:52:13.443201    2536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:13.444044    2536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:13.445720    2536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:13.446006    2536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:13.447466    2536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
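The cycles above begin at 05:52:04, :07, :10, :13 and so on, roughly every three seconds: each failed apiserver probe triggers another full diagnostics sweep. As a rough shell equivalent of the wait loop driving this output (a sketch only, not minikube's actual Go implementation):

    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
        sleep 3    # every failed probe above is followed by the container sweep and log gathering
    done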
	I1209 05:52:15.952193 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:52:15.962440 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:52:15.962511 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:52:15.990421 1437114 cri.go:89] found id: ""
	I1209 05:52:15.990444 1437114 logs.go:282] 0 containers: []
	W1209 05:52:15.990452 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:52:15.990459 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:52:15.990527 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:52:16.025731 1437114 cri.go:89] found id: ""
	I1209 05:52:16.025759 1437114 logs.go:282] 0 containers: []
	W1209 05:52:16.025768 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:52:16.025775 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:52:16.025850 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:52:16.051150 1437114 cri.go:89] found id: ""
	I1209 05:52:16.051184 1437114 logs.go:282] 0 containers: []
	W1209 05:52:16.051193 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:52:16.051199 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:52:16.051269 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:52:16.080315 1437114 cri.go:89] found id: ""
	I1209 05:52:16.080343 1437114 logs.go:282] 0 containers: []
	W1209 05:52:16.080352 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:52:16.080358 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:52:16.080421 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:52:16.106254 1437114 cri.go:89] found id: ""
	I1209 05:52:16.106329 1437114 logs.go:282] 0 containers: []
	W1209 05:52:16.106344 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:52:16.106351 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:52:16.106419 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:52:16.130691 1437114 cri.go:89] found id: ""
	I1209 05:52:16.130717 1437114 logs.go:282] 0 containers: []
	W1209 05:52:16.130726 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:52:16.130732 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:52:16.130788 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:52:16.156232 1437114 cri.go:89] found id: ""
	I1209 05:52:16.156257 1437114 logs.go:282] 0 containers: []
	W1209 05:52:16.156266 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:52:16.156272 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:52:16.156333 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:52:16.186070 1437114 cri.go:89] found id: ""
	I1209 05:52:16.186091 1437114 logs.go:282] 0 containers: []
	W1209 05:52:16.186100 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:52:16.186109 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:52:16.186121 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:52:16.203551 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:52:16.203579 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:52:16.280037 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:52:16.272128    2631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:16.272800    2631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:16.274272    2631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:16.274686    2631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:16.276185    2631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:52:16.280087 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:52:16.280102 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:52:16.304445 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:52:16.304479 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:52:16.333574 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:52:16.333599 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:52:18.890807 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:52:18.901129 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:52:18.901207 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:52:18.925553 1437114 cri.go:89] found id: ""
	I1209 05:52:18.925576 1437114 logs.go:282] 0 containers: []
	W1209 05:52:18.925584 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:52:18.925590 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:52:18.925648 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:52:18.951104 1437114 cri.go:89] found id: ""
	I1209 05:52:18.951180 1437114 logs.go:282] 0 containers: []
	W1209 05:52:18.951203 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:52:18.951221 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:52:18.951309 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:52:18.975343 1437114 cri.go:89] found id: ""
	I1209 05:52:18.975407 1437114 logs.go:282] 0 containers: []
	W1209 05:52:18.975432 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:52:18.975450 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:52:18.975535 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:52:18.999522 1437114 cri.go:89] found id: ""
	I1209 05:52:18.999596 1437114 logs.go:282] 0 containers: []
	W1209 05:52:18.999619 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:52:18.999637 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:52:18.999722 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:52:19.025106 1437114 cri.go:89] found id: ""
	I1209 05:52:19.025181 1437114 logs.go:282] 0 containers: []
	W1209 05:52:19.025203 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:52:19.025221 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:52:19.025307 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:52:19.047867 1437114 cri.go:89] found id: ""
	I1209 05:52:19.047944 1437114 logs.go:282] 0 containers: []
	W1209 05:52:19.047966 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:52:19.048006 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:52:19.048106 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:52:19.071487 1437114 cri.go:89] found id: ""
	I1209 05:52:19.071511 1437114 logs.go:282] 0 containers: []
	W1209 05:52:19.071519 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:52:19.071526 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:52:19.071585 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:52:19.096506 1437114 cri.go:89] found id: ""
	I1209 05:52:19.096531 1437114 logs.go:282] 0 containers: []
	W1209 05:52:19.096540 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:52:19.096549 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:52:19.096595 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:52:19.111961 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:52:19.112001 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:52:19.184448 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:52:19.173564    2740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:19.174163    2740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:19.175662    2740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:19.176275    2740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:19.178917    2740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:52:19.184473 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:52:19.184487 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:52:19.213109 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:52:19.213148 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:52:19.242001 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:52:19.242036 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:52:21.800441 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:52:21.810634 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:52:21.810706 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:52:21.835147 1437114 cri.go:89] found id: ""
	I1209 05:52:21.835171 1437114 logs.go:282] 0 containers: []
	W1209 05:52:21.835180 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:52:21.835186 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:52:21.835244 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:52:21.863735 1437114 cri.go:89] found id: ""
	I1209 05:52:21.863760 1437114 logs.go:282] 0 containers: []
	W1209 05:52:21.863769 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:52:21.863775 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:52:21.863833 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:52:21.887643 1437114 cri.go:89] found id: ""
	I1209 05:52:21.887667 1437114 logs.go:282] 0 containers: []
	W1209 05:52:21.887676 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:52:21.887682 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:52:21.887738 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:52:21.912358 1437114 cri.go:89] found id: ""
	I1209 05:52:21.912384 1437114 logs.go:282] 0 containers: []
	W1209 05:52:21.912392 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:52:21.912399 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:52:21.912458 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:52:21.941394 1437114 cri.go:89] found id: ""
	I1209 05:52:21.941420 1437114 logs.go:282] 0 containers: []
	W1209 05:52:21.941429 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:52:21.941435 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:52:21.941521 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:52:21.945768 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 05:52:21.973669 1437114 cri.go:89] found id: ""
	I1209 05:52:21.973703 1437114 logs.go:282] 0 containers: []
	W1209 05:52:21.973712 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:52:21.973734 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:52:21.973814 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	W1209 05:52:22.028092 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:52:22.028115 1437114 cri.go:89] found id: ""
	I1209 05:52:22.028247 1437114 logs.go:282] 0 containers: []
	W1209 05:52:22.028256 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:52:22.028268 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	W1209 05:52:22.028296 1437114 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
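Note that the storageclass addon apply is interleaved with the container sweep (the two apparently run concurrently and share the log) and fails for the same root cause. The error text suggests --validate=false, which would skip the OpenAPI download, but it would not rescue this run: the subsequent write to the apiserver would be refused just the same. The suggested (here ineffective) form would be:

    sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force --validate=false -f /etc/kubernetes/addons/storageclass.yaml

minikube itself instead retries the apply, as the "apply failed, will retry" line indicates.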
	I1209 05:52:22.028335 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:52:22.054827 1437114 cri.go:89] found id: ""
	I1209 05:52:22.054854 1437114 logs.go:282] 0 containers: []
	W1209 05:52:22.054862 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:52:22.054871 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:52:22.054883 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:52:22.081941 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:52:22.081985 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:52:22.109801 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:52:22.109829 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:52:22.167418 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:52:22.167455 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:52:22.186947 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:52:22.187039 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:52:22.274107 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:52:22.265076    2876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:22.265712    2876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:22.267349    2876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:22.267990    2876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:22.269553    2876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:52:24.774371 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:52:24.785291 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:52:24.785383 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:52:24.810496 1437114 cri.go:89] found id: ""
	I1209 05:52:24.810521 1437114 logs.go:282] 0 containers: []
	W1209 05:52:24.810530 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:52:24.810537 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:52:24.810641 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:52:24.840246 1437114 cri.go:89] found id: ""
	I1209 05:52:24.840283 1437114 logs.go:282] 0 containers: []
	W1209 05:52:24.840292 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:52:24.840298 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:52:24.840383 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:52:24.866227 1437114 cri.go:89] found id: ""
	I1209 05:52:24.866252 1437114 logs.go:282] 0 containers: []
	W1209 05:52:24.866267 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:52:24.866274 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:52:24.866334 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:52:24.894487 1437114 cri.go:89] found id: ""
	I1209 05:52:24.894512 1437114 logs.go:282] 0 containers: []
	W1209 05:52:24.894521 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:52:24.894528 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:52:24.894592 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:52:24.919081 1437114 cri.go:89] found id: ""
	I1209 05:52:24.919106 1437114 logs.go:282] 0 containers: []
	W1209 05:52:24.919115 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:52:24.919122 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:52:24.919182 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:52:24.942639 1437114 cri.go:89] found id: ""
	I1209 05:52:24.942664 1437114 logs.go:282] 0 containers: []
	W1209 05:52:24.942673 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:52:24.942679 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:52:24.942736 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:52:24.966811 1437114 cri.go:89] found id: ""
	I1209 05:52:24.966835 1437114 logs.go:282] 0 containers: []
	W1209 05:52:24.966844 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:52:24.966849 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:52:24.966906 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:52:24.990491 1437114 cri.go:89] found id: ""
	I1209 05:52:24.990515 1437114 logs.go:282] 0 containers: []
	W1209 05:52:24.990524 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:52:24.990533 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:52:24.990544 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:52:25.049211 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:52:25.049244 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:52:25.065441 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:52:25.065469 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:52:25.128713 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:52:25.120700    2971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:25.121283    2971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:25.122776    2971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:25.123296    2971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:25.124752    2971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:52:25.128735 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:52:25.128750 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:52:25.154485 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:52:25.154518 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
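The cycle above is minikube's control-plane wait loop: every few seconds it looks for a running kube-apiserver process, asks the CRI runtime for each expected component container (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, kubernetes-dashboard), and, finding none, gathers kubelet, dmesg, describe-nodes, containerd, and container-status logs before retrying. The same two probes can be run by hand inside the node; a minimal sketch, assuming the docker driver and using <profile> as a placeholder profile name (hypothetical, not taken from this log):

  # Same process check the loop performs
  minikube ssh -p <profile> -- sudo pgrep -xnf 'kube-apiserver.*minikube.*'
  # Ask the CRI for apiserver containers in any state (running or exited)
  minikube ssh -p <profile> -- sudo crictl ps -a --quiet --name=kube-apiserver

Both commands returning nothing, as in every iteration of this log, means the control plane never started, which is consistent with the repeated connection-refused errors on localhost:8443.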
	I1209 05:52:27.686448 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:52:27.697271 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:52:27.697388 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:52:27.723850 1437114 cri.go:89] found id: ""
	I1209 05:52:27.723930 1437114 logs.go:282] 0 containers: []
	W1209 05:52:27.723953 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:52:27.723970 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:52:27.724082 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:52:27.749864 1437114 cri.go:89] found id: ""
	I1209 05:52:27.749889 1437114 logs.go:282] 0 containers: []
	W1209 05:52:27.749897 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:52:27.749904 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:52:27.749989 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:52:27.773124 1437114 cri.go:89] found id: ""
	I1209 05:52:27.773151 1437114 logs.go:282] 0 containers: []
	W1209 05:52:27.773167 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:52:27.773174 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:52:27.773238 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:52:27.802090 1437114 cri.go:89] found id: ""
	I1209 05:52:27.802118 1437114 logs.go:282] 0 containers: []
	W1209 05:52:27.802128 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:52:27.802134 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:52:27.802193 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:52:27.827324 1437114 cri.go:89] found id: ""
	I1209 05:52:27.827349 1437114 logs.go:282] 0 containers: []
	W1209 05:52:27.827361 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:52:27.827367 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:52:27.827425 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:52:27.855877 1437114 cri.go:89] found id: ""
	I1209 05:52:27.855905 1437114 logs.go:282] 0 containers: []
	W1209 05:52:27.855914 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:52:27.855920 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:52:27.855980 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:52:27.880242 1437114 cri.go:89] found id: ""
	I1209 05:52:27.880322 1437114 logs.go:282] 0 containers: []
	W1209 05:52:27.880346 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:52:27.880365 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:52:27.880457 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:52:27.903986 1437114 cri.go:89] found id: ""
	I1209 05:52:27.904032 1437114 logs.go:282] 0 containers: []
	W1209 05:52:27.904041 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:52:27.904079 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:52:27.904100 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:52:27.937811 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:52:27.937838 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:52:27.993533 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:52:27.993570 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:52:28.010780 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:52:28.010818 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:52:28.075391 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:52:28.066786    3096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:28.067667    3096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:28.069423    3096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:28.069776    3096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:28.071145    3096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:52:28.075424 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:52:28.075454 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:52:30.602097 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:52:30.612434 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:52:30.612508 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:52:30.638153 1437114 cri.go:89] found id: ""
	I1209 05:52:30.638183 1437114 logs.go:282] 0 containers: []
	W1209 05:52:30.638191 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:52:30.638197 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:52:30.638280 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:52:30.664120 1437114 cri.go:89] found id: ""
	I1209 05:52:30.664206 1437114 logs.go:282] 0 containers: []
	W1209 05:52:30.664221 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:52:30.664229 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:52:30.664291 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:52:30.695098 1437114 cri.go:89] found id: ""
	I1209 05:52:30.695124 1437114 logs.go:282] 0 containers: []
	W1209 05:52:30.695132 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:52:30.695138 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:52:30.695196 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:52:30.728679 1437114 cri.go:89] found id: ""
	I1209 05:52:30.728703 1437114 logs.go:282] 0 containers: []
	W1209 05:52:30.728711 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:52:30.728718 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:52:30.728777 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:52:30.757085 1437114 cri.go:89] found id: ""
	I1209 05:52:30.757108 1437114 logs.go:282] 0 containers: []
	W1209 05:52:30.757116 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:52:30.757122 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:52:30.757190 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:52:30.781813 1437114 cri.go:89] found id: ""
	I1209 05:52:30.781838 1437114 logs.go:282] 0 containers: []
	W1209 05:52:30.781847 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:52:30.781853 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:52:30.781931 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:52:30.805893 1437114 cri.go:89] found id: ""
	I1209 05:52:30.805958 1437114 logs.go:282] 0 containers: []
	W1209 05:52:30.805972 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:52:30.805980 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:52:30.806045 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:52:30.838632 1437114 cri.go:89] found id: ""
	I1209 05:52:30.838657 1437114 logs.go:282] 0 containers: []
	W1209 05:52:30.838666 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:52:30.838675 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:52:30.838686 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:52:30.853978 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:52:30.854004 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:52:30.918110 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:52:30.910818    3197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:30.911400    3197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:30.912432    3197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:30.912927    3197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:30.914407    3197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:52:30.918132 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:52:30.918144 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:52:30.943105 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:52:30.943142 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:52:30.969706 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:52:30.969735 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
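A complementary spot check, independent of the CRI, is to probe the advertised endpoint directly; /healthz is the standard apiserver health path, and the profile name below is again a placeholder:

  minikube ssh -p <profile> -- curl -sk https://localhost:8443/healthz

With no apiserver listening, curl fails with the same "connection refused" that kubectl reports in each describe-nodes attempt above.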
	I1209 05:52:33.525286 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:52:33.535730 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:52:33.535803 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:52:33.559344 1437114 cri.go:89] found id: ""
	I1209 05:52:33.559369 1437114 logs.go:282] 0 containers: []
	W1209 05:52:33.559378 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:52:33.559384 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:52:33.559441 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:52:33.588185 1437114 cri.go:89] found id: ""
	I1209 05:52:33.588254 1437114 logs.go:282] 0 containers: []
	W1209 05:52:33.588278 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:52:33.588292 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:52:33.588366 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:52:33.613255 1437114 cri.go:89] found id: ""
	I1209 05:52:33.613279 1437114 logs.go:282] 0 containers: []
	W1209 05:52:33.613288 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:52:33.613295 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:52:33.613382 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:52:33.636919 1437114 cri.go:89] found id: ""
	I1209 05:52:33.636953 1437114 logs.go:282] 0 containers: []
	W1209 05:52:33.636961 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:52:33.636968 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:52:33.637035 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:52:33.666309 1437114 cri.go:89] found id: ""
	I1209 05:52:33.666342 1437114 logs.go:282] 0 containers: []
	W1209 05:52:33.666351 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:52:33.666358 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:52:33.666424 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:52:33.698208 1437114 cri.go:89] found id: ""
	I1209 05:52:33.698283 1437114 logs.go:282] 0 containers: []
	W1209 05:52:33.698305 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:52:33.698324 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:52:33.698413 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:52:33.730383 1437114 cri.go:89] found id: ""
	I1209 05:52:33.730456 1437114 logs.go:282] 0 containers: []
	W1209 05:52:33.730479 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:52:33.730499 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:52:33.730585 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:52:33.759854 1437114 cri.go:89] found id: ""
	I1209 05:52:33.759930 1437114 logs.go:282] 0 containers: []
	W1209 05:52:33.759952 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:52:33.759972 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:52:33.760007 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:52:33.822572 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:52:33.815081    3306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:33.815468    3306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:33.816948    3306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:33.817250    3306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:33.818729    3306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:52:33.822593 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:52:33.822606 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:52:33.848713 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:52:33.848751 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:52:33.875169 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:52:33.875202 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:52:33.929863 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:52:33.929899 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:52:36.446655 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:52:36.457494 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:52:36.457564 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:52:36.489953 1437114 cri.go:89] found id: ""
	I1209 05:52:36.490015 1437114 logs.go:282] 0 containers: []
	W1209 05:52:36.490045 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:52:36.490069 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:52:36.490171 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:52:36.518208 1437114 cri.go:89] found id: ""
	I1209 05:52:36.518232 1437114 logs.go:282] 0 containers: []
	W1209 05:52:36.518240 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:52:36.518246 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:52:36.518303 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:52:36.546757 1437114 cri.go:89] found id: ""
	I1209 05:52:36.546830 1437114 logs.go:282] 0 containers: []
	W1209 05:52:36.546852 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:52:36.546870 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:52:36.546958 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:52:36.573478 1437114 cri.go:89] found id: ""
	I1209 05:52:36.573504 1437114 logs.go:282] 0 containers: []
	W1209 05:52:36.573512 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:52:36.573518 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:52:36.573573 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:52:36.597359 1437114 cri.go:89] found id: ""
	I1209 05:52:36.597384 1437114 logs.go:282] 0 containers: []
	W1209 05:52:36.597392 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:52:36.597399 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:52:36.597456 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:52:36.626723 1437114 cri.go:89] found id: ""
	I1209 05:52:36.626750 1437114 logs.go:282] 0 containers: []
	W1209 05:52:36.626758 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:52:36.626765 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:52:36.626821 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:52:36.651878 1437114 cri.go:89] found id: ""
	I1209 05:52:36.651904 1437114 logs.go:282] 0 containers: []
	W1209 05:52:36.651913 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:52:36.651920 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:52:36.651983 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:52:36.677687 1437114 cri.go:89] found id: ""
	I1209 05:52:36.677763 1437114 logs.go:282] 0 containers: []
	W1209 05:52:36.677786 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:52:36.677806 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:52:36.677844 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:52:36.762388 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:52:36.754574    3419 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:36.755265    3419 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:36.756812    3419 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:36.757117    3419 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:36.758563    3419 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:52:36.762408 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:52:36.762421 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:52:36.787210 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:52:36.787245 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:52:36.813523 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:52:36.813549 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:52:36.871098 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:52:36.871134 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:52:39.145660 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1209 05:52:39.203856 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 05:52:39.203957 1437114 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
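The storage-provisioner apply fails for the same root cause: kubectl's client-side validation needs the server's OpenAPI document, and nothing is listening on localhost:8443. The --validate=false flag suggested in the error would only skip validation; the apply itself must still reach the API server, so it would fail identically here. For reference, the retried command with that flag added (assembled from the paths in this log, not a fix for this failure):

  sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
    /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force --validate=false \
    -f /etc/kubernetes/addons/storage-provisioner.yaml

As the "apply failed, will retry" warning indicates, minikube retries this addon callback automatically until the apiserver becomes reachable or the overall start times out.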
	I1209 05:52:39.388175 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:52:39.398492 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:52:39.398583 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:52:39.425881 1437114 cri.go:89] found id: ""
	I1209 05:52:39.425914 1437114 logs.go:282] 0 containers: []
	W1209 05:52:39.425924 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:52:39.425930 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:52:39.425998 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:52:39.450356 1437114 cri.go:89] found id: ""
	I1209 05:52:39.450390 1437114 logs.go:282] 0 containers: []
	W1209 05:52:39.450399 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:52:39.450405 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:52:39.450472 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:52:39.482441 1437114 cri.go:89] found id: ""
	I1209 05:52:39.482475 1437114 logs.go:282] 0 containers: []
	W1209 05:52:39.482483 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:52:39.482490 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:52:39.482554 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:52:39.512577 1437114 cri.go:89] found id: ""
	I1209 05:52:39.512602 1437114 logs.go:282] 0 containers: []
	W1209 05:52:39.512611 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:52:39.512617 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:52:39.512674 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:52:39.537514 1437114 cri.go:89] found id: ""
	I1209 05:52:39.537539 1437114 logs.go:282] 0 containers: []
	W1209 05:52:39.537547 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:52:39.537559 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:52:39.537620 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:52:39.561319 1437114 cri.go:89] found id: ""
	I1209 05:52:39.561352 1437114 logs.go:282] 0 containers: []
	W1209 05:52:39.561360 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:52:39.561366 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:52:39.561442 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:52:39.589300 1437114 cri.go:89] found id: ""
	I1209 05:52:39.589324 1437114 logs.go:282] 0 containers: []
	W1209 05:52:39.589333 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:52:39.589339 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:52:39.589398 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:52:39.620288 1437114 cri.go:89] found id: ""
	I1209 05:52:39.620312 1437114 logs.go:282] 0 containers: []
	W1209 05:52:39.620321 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:52:39.620339 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:52:39.620351 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:52:39.678215 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:52:39.678293 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:52:39.697337 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:52:39.697364 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:52:39.767115 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:52:39.758981    3543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:39.759384    3543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:39.761296    3543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:39.761699    3543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:39.763232    3543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:52:39.767135 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:52:39.767147 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:52:39.791949 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:52:39.791985 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:52:42.324195 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:52:42.339508 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:52:42.339591 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:52:42.370155 1437114 cri.go:89] found id: ""
	I1209 05:52:42.370181 1437114 logs.go:282] 0 containers: []
	W1209 05:52:42.370192 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:52:42.370199 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:52:42.370268 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:52:42.395020 1437114 cri.go:89] found id: ""
	I1209 05:52:42.395054 1437114 logs.go:282] 0 containers: []
	W1209 05:52:42.395063 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:52:42.395069 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:52:42.395136 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:52:42.423571 1437114 cri.go:89] found id: ""
	I1209 05:52:42.423604 1437114 logs.go:282] 0 containers: []
	W1209 05:52:42.423612 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:52:42.423618 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:52:42.423684 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:52:42.449744 1437114 cri.go:89] found id: ""
	I1209 05:52:42.449821 1437114 logs.go:282] 0 containers: []
	W1209 05:52:42.449846 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:52:42.449865 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:52:42.449951 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:52:42.476838 1437114 cri.go:89] found id: ""
	I1209 05:52:42.476864 1437114 logs.go:282] 0 containers: []
	W1209 05:52:42.476872 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:52:42.476879 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:52:42.476957 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:52:42.505251 1437114 cri.go:89] found id: ""
	I1209 05:52:42.505278 1437114 logs.go:282] 0 containers: []
	W1209 05:52:42.505287 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:52:42.505294 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:52:42.505372 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:52:42.529646 1437114 cri.go:89] found id: ""
	I1209 05:52:42.529712 1437114 logs.go:282] 0 containers: []
	W1209 05:52:42.529728 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:52:42.529741 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:52:42.529803 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:52:42.553792 1437114 cri.go:89] found id: ""
	I1209 05:52:42.553818 1437114 logs.go:282] 0 containers: []
	W1209 05:52:42.553827 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:52:42.553836 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:52:42.553865 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:52:42.610712 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:52:42.610750 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:52:42.626470 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:52:42.626498 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:52:42.691633 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:52:42.681192    3655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:42.683916    3655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:42.685453    3655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:42.685744    3655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:42.687188    3655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:52:42.691658 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:52:42.691672 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:52:42.721023 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:52:42.721056 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:52:45.257072 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:52:45.279876 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:52:45.279970 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:52:45.310797 1437114 cri.go:89] found id: ""
	I1209 05:52:45.310822 1437114 logs.go:282] 0 containers: []
	W1209 05:52:45.310831 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:52:45.310837 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:52:45.310915 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:52:45.339967 1437114 cri.go:89] found id: ""
	I1209 05:52:45.339990 1437114 logs.go:282] 0 containers: []
	W1209 05:52:45.339999 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:52:45.340004 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:52:45.340083 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:52:45.368323 1437114 cri.go:89] found id: ""
	I1209 05:52:45.368351 1437114 logs.go:282] 0 containers: []
	W1209 05:52:45.368360 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:52:45.368368 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:52:45.368427 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:52:45.393892 1437114 cri.go:89] found id: ""
	I1209 05:52:45.393918 1437114 logs.go:282] 0 containers: []
	W1209 05:52:45.393926 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:52:45.393932 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:52:45.393995 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:52:45.418992 1437114 cri.go:89] found id: ""
	I1209 05:52:45.419025 1437114 logs.go:282] 0 containers: []
	W1209 05:52:45.419035 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:52:45.419041 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:52:45.419107 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:52:45.461356 1437114 cri.go:89] found id: ""
	I1209 05:52:45.461392 1437114 logs.go:282] 0 containers: []
	W1209 05:52:45.461401 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:52:45.461407 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:52:45.461481 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:52:45.493718 1437114 cri.go:89] found id: ""
	I1209 05:52:45.493753 1437114 logs.go:282] 0 containers: []
	W1209 05:52:45.493762 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:52:45.493768 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:52:45.493836 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:52:45.517850 1437114 cri.go:89] found id: ""
	I1209 05:52:45.517876 1437114 logs.go:282] 0 containers: []
	W1209 05:52:45.517898 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:52:45.517907 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:52:45.517922 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:52:45.576699 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:52:45.576736 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:52:45.592339 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:52:45.592368 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:52:45.660368 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:52:45.651938    3766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:45.652711    3766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:45.654414    3766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:45.654934    3766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:45.656559    3766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:52:45.660391 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:52:45.660404 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:52:45.687142 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:52:45.687222 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:52:48.227261 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:52:48.237593 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:52:48.237680 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:52:48.260468 1437114 cri.go:89] found id: ""
	I1209 05:52:48.260493 1437114 logs.go:282] 0 containers: []
	W1209 05:52:48.260502 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:52:48.260509 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:52:48.260570 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:52:48.289034 1437114 cri.go:89] found id: ""
	I1209 05:52:48.289059 1437114 logs.go:282] 0 containers: []
	W1209 05:52:48.289068 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:52:48.289074 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:52:48.289150 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:52:48.316323 1437114 cri.go:89] found id: ""
	I1209 05:52:48.316349 1437114 logs.go:282] 0 containers: []
	W1209 05:52:48.316358 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:52:48.316364 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:52:48.316434 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:52:48.342218 1437114 cri.go:89] found id: ""
	I1209 05:52:48.342240 1437114 logs.go:282] 0 containers: []
	W1209 05:52:48.342249 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:52:48.342255 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:52:48.342308 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:52:48.371363 1437114 cri.go:89] found id: ""
	I1209 05:52:48.371390 1437114 logs.go:282] 0 containers: []
	W1209 05:52:48.371399 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:52:48.371406 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:52:48.371466 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:52:48.395178 1437114 cri.go:89] found id: ""
	I1209 05:52:48.395204 1437114 logs.go:282] 0 containers: []
	W1209 05:52:48.395212 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:52:48.395218 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:52:48.395274 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:52:48.419670 1437114 cri.go:89] found id: ""
	I1209 05:52:48.419709 1437114 logs.go:282] 0 containers: []
	W1209 05:52:48.419718 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:52:48.419740 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:52:48.419825 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:52:48.461924 1437114 cri.go:89] found id: ""
	I1209 05:52:48.461946 1437114 logs.go:282] 0 containers: []
	W1209 05:52:48.461954 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
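The eight crictl queries above scan for one control-plane container at a time; every one returns an empty id list, confirming that no Kubernetes containers exist in the /run/containerd/runc/k8s.io root. A manual equivalent of the same scan, run inside the node (a sketch; component names copied from the log):

    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      echo "== $name =="
      sudo crictl ps -a --quiet --name="$name"
    done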
	I1209 05:52:48.461963 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:52:48.461974 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:52:48.528889 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:52:48.528926 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:52:48.544946 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:52:48.544976 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:52:48.610447 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:52:48.602428    3879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:48.603193    3879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:48.604673    3879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:48.605169    3879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:48.606641    3879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:52:48.602428    3879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:48.603193    3879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:48.604673    3879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:48.605169    3879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:48.606641    3879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:52:48.610466 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:52:48.610478 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:52:48.636193 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:52:48.636232 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:52:49.474531 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1209 05:52:49.539382 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 05:52:49.539481 1437114 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1209 05:52:49.543501 1437114 out.go:179] * Enabled addons: 
	I1209 05:52:49.546285 1437114 addons.go:530] duration metric: took 1m54.169473068s for enable addons: enabled=[]
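The dashboard apply never reaches server-side processing: kubectl first tries to download the OpenAPI schema it validates manifests against, and that download itself needs the apiserver. The --validate=false flag suggested in the stderr would skip the schema fetch but the apply would still fail at submission, since the connection to localhost:8443 is refused outright. For reference, the bypass the stderr suggests looks like this (a sketch over one of the manifests named above):

    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force --validate=false \
      -f /etc/kubernetes/addons/dashboard-ns.yaml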
	I1209 05:52:51.163525 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:52:51.174339 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:52:51.174465 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:52:51.198800 1437114 cri.go:89] found id: ""
	I1209 05:52:51.198828 1437114 logs.go:282] 0 containers: []
	W1209 05:52:51.198837 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:52:51.198843 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:52:51.198901 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:52:51.224524 1437114 cri.go:89] found id: ""
	I1209 05:52:51.224552 1437114 logs.go:282] 0 containers: []
	W1209 05:52:51.224561 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:52:51.224568 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:52:51.224626 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:52:51.249032 1437114 cri.go:89] found id: ""
	I1209 05:52:51.249099 1437114 logs.go:282] 0 containers: []
	W1209 05:52:51.249122 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:52:51.249136 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:52:51.249210 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:52:51.272901 1437114 cri.go:89] found id: ""
	I1209 05:52:51.272929 1437114 logs.go:282] 0 containers: []
	W1209 05:52:51.272937 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:52:51.272950 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:52:51.273011 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:52:51.296909 1437114 cri.go:89] found id: ""
	I1209 05:52:51.296935 1437114 logs.go:282] 0 containers: []
	W1209 05:52:51.296943 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:52:51.296949 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:52:51.297007 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:52:51.325419 1437114 cri.go:89] found id: ""
	I1209 05:52:51.325499 1437114 logs.go:282] 0 containers: []
	W1209 05:52:51.325522 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:52:51.325537 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:52:51.325609 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:52:51.350449 1437114 cri.go:89] found id: ""
	I1209 05:52:51.350475 1437114 logs.go:282] 0 containers: []
	W1209 05:52:51.350484 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:52:51.350490 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:52:51.350571 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:52:51.378459 1437114 cri.go:89] found id: ""
	I1209 05:52:51.378482 1437114 logs.go:282] 0 containers: []
	W1209 05:52:51.378490 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:52:51.378501 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:52:51.378512 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:52:51.439032 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:52:51.439075 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:52:51.457325 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:52:51.457355 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:52:51.525486 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:52:51.517693    3998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:51.518243    3998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:51.519766    3998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:51.520306    3998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:51.521762    3998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:52:51.517693    3998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:51.518243    3998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:51.519766    3998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:51.520306    3998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:51.521762    3998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:52:51.525549 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:52:51.525570 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:52:51.551425 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:52:51.551463 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
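Each gather cycle runs the same host-side collectors, visible in the Run: lines above: the kubelet and containerd journals plus a filtered kernel ring buffer. Their manual equivalents inside the node, with flags copied verbatim from the log:

    sudo journalctl -u kubelet -n 400
    sudo journalctl -u containerd -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400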
	I1209 05:52:54.078624 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:52:54.089324 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:52:54.089395 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:52:54.117819 1437114 cri.go:89] found id: ""
	I1209 05:52:54.117840 1437114 logs.go:282] 0 containers: []
	W1209 05:52:54.117856 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:52:54.117863 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:52:54.117923 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:52:54.143006 1437114 cri.go:89] found id: ""
	I1209 05:52:54.143083 1437114 logs.go:282] 0 containers: []
	W1209 05:52:54.143105 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:52:54.143125 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:52:54.143200 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:52:54.168655 1437114 cri.go:89] found id: ""
	I1209 05:52:54.168715 1437114 logs.go:282] 0 containers: []
	W1209 05:52:54.168742 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:52:54.168758 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:52:54.168847 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:52:54.193433 1437114 cri.go:89] found id: ""
	I1209 05:52:54.193459 1437114 logs.go:282] 0 containers: []
	W1209 05:52:54.193467 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:52:54.193474 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:52:54.193558 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:52:54.216587 1437114 cri.go:89] found id: ""
	I1209 05:52:54.216663 1437114 logs.go:282] 0 containers: []
	W1209 05:52:54.216686 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:52:54.216700 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:52:54.216775 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:52:54.240686 1437114 cri.go:89] found id: ""
	I1209 05:52:54.240723 1437114 logs.go:282] 0 containers: []
	W1209 05:52:54.240732 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:52:54.240739 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:52:54.240830 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:52:54.264680 1437114 cri.go:89] found id: ""
	I1209 05:52:54.264710 1437114 logs.go:282] 0 containers: []
	W1209 05:52:54.264719 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:52:54.264725 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:52:54.264785 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:52:54.288715 1437114 cri.go:89] found id: ""
	I1209 05:52:54.288739 1437114 logs.go:282] 0 containers: []
	W1209 05:52:54.288748 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:52:54.288757 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:52:54.288769 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:52:54.344591 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:52:54.344629 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:52:54.360275 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:52:54.360350 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:52:54.422057 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:52:54.413842    4106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:54.414541    4106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:54.416178    4106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:54.416655    4106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:54.418204    4106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:52:54.413842    4106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:54.414541    4106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:54.416178    4106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:54.416655    4106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:54.418204    4106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:52:54.422081 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:52:54.422093 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:52:54.451978 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:52:54.452157 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:52:56.987228 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:52:56.997370 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:52:56.997440 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:52:57.026856 1437114 cri.go:89] found id: ""
	I1209 05:52:57.026878 1437114 logs.go:282] 0 containers: []
	W1209 05:52:57.026886 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:52:57.026893 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:52:57.026955 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:52:57.052417 1437114 cri.go:89] found id: ""
	I1209 05:52:57.052442 1437114 logs.go:282] 0 containers: []
	W1209 05:52:57.052450 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:52:57.052457 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:52:57.052517 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:52:57.079492 1437114 cri.go:89] found id: ""
	I1209 05:52:57.079516 1437114 logs.go:282] 0 containers: []
	W1209 05:52:57.079526 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:52:57.079532 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:52:57.079590 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:52:57.103111 1437114 cri.go:89] found id: ""
	I1209 05:52:57.103135 1437114 logs.go:282] 0 containers: []
	W1209 05:52:57.103144 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:52:57.103150 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:52:57.103212 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:52:57.129591 1437114 cri.go:89] found id: ""
	I1209 05:52:57.129616 1437114 logs.go:282] 0 containers: []
	W1209 05:52:57.129624 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:52:57.129631 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:52:57.129706 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:52:57.153092 1437114 cri.go:89] found id: ""
	I1209 05:52:57.153115 1437114 logs.go:282] 0 containers: []
	W1209 05:52:57.153124 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:52:57.153131 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:52:57.153189 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:52:57.177623 1437114 cri.go:89] found id: ""
	I1209 05:52:57.177647 1437114 logs.go:282] 0 containers: []
	W1209 05:52:57.177656 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:52:57.177662 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:52:57.177748 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:52:57.202469 1437114 cri.go:89] found id: ""
	I1209 05:52:57.202493 1437114 logs.go:282] 0 containers: []
	W1209 05:52:57.202502 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:52:57.202511 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:52:57.202550 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:52:57.260356 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:52:57.260393 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:52:57.276459 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:52:57.276539 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:52:57.343015 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:52:57.335090    4217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:57.335845    4217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:57.337423    4217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:57.337717    4217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:57.339202    4217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:52:57.335090    4217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:57.335845    4217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:57.337423    4217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:57.337717    4217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:57.339202    4217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:52:57.343037 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:52:57.343052 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:52:57.368448 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:52:57.368485 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:52:59.899132 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:52:59.909390 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:52:59.909502 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:52:59.942228 1437114 cri.go:89] found id: ""
	I1209 05:52:59.942299 1437114 logs.go:282] 0 containers: []
	W1209 05:52:59.942333 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:52:59.942354 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:52:59.942464 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:52:59.967993 1437114 cri.go:89] found id: ""
	I1209 05:52:59.968090 1437114 logs.go:282] 0 containers: []
	W1209 05:52:59.968105 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:52:59.968112 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:52:59.968183 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:53:00.004409 1437114 cri.go:89] found id: ""
	I1209 05:53:00.004444 1437114 logs.go:282] 0 containers: []
	W1209 05:53:00.004453 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:53:00.004461 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:53:00.004542 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:53:00.122181 1437114 cri.go:89] found id: ""
	I1209 05:53:00.122206 1437114 logs.go:282] 0 containers: []
	W1209 05:53:00.122216 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:53:00.122238 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:53:00.122319 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:53:00.178386 1437114 cri.go:89] found id: ""
	I1209 05:53:00.178469 1437114 logs.go:282] 0 containers: []
	W1209 05:53:00.178481 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:53:00.178488 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:53:00.178720 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:53:00.226314 1437114 cri.go:89] found id: ""
	I1209 05:53:00.226451 1437114 logs.go:282] 0 containers: []
	W1209 05:53:00.226477 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:53:00.226486 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:53:00.226568 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:53:00.271734 1437114 cri.go:89] found id: ""
	I1209 05:53:00.271771 1437114 logs.go:282] 0 containers: []
	W1209 05:53:00.271782 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:53:00.271790 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:53:00.271932 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:53:00.335362 1437114 cri.go:89] found id: ""
	I1209 05:53:00.335448 1437114 logs.go:282] 0 containers: []
	W1209 05:53:00.335466 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:53:00.335477 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:53:00.335493 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:53:00.365642 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:53:00.365684 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:53:00.400318 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:53:00.400349 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:53:00.462709 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:53:00.462752 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:53:00.480156 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:53:00.480188 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:53:00.548948 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:53:00.540982    4347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:00.541655    4347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:00.543286    4347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:00.543662    4347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:00.545115    4347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:53:00.540982    4347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:00.541655    4347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:00.543286    4347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:00.543662    4347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:00.545115    4347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
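The cycles are paced by the pgrep probe that opens each one: -f matches against the full command line, -x requires an exact match of that line against the pattern, and -n picks the newest match, so a non-zero exit means no kube-apiserver process exists yet and the wait loop retries. Manual equivalent (a sketch):

    sudo pgrep -xnf 'kube-apiserver.*minikube.*' && echo running || echo "not running"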
	I1209 05:53:03.050610 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:53:03.061297 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:53:03.061406 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:53:03.090201 1437114 cri.go:89] found id: ""
	I1209 05:53:03.090232 1437114 logs.go:282] 0 containers: []
	W1209 05:53:03.090240 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:53:03.090248 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:53:03.090313 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:53:03.115399 1437114 cri.go:89] found id: ""
	I1209 05:53:03.115424 1437114 logs.go:282] 0 containers: []
	W1209 05:53:03.115432 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:53:03.115438 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:53:03.115497 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:53:03.138652 1437114 cri.go:89] found id: ""
	I1209 05:53:03.138685 1437114 logs.go:282] 0 containers: []
	W1209 05:53:03.138694 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:53:03.138700 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:53:03.138771 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:53:03.163354 1437114 cri.go:89] found id: ""
	I1209 05:53:03.163387 1437114 logs.go:282] 0 containers: []
	W1209 05:53:03.163396 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:53:03.163402 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:53:03.163467 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:53:03.189982 1437114 cri.go:89] found id: ""
	I1209 05:53:03.190008 1437114 logs.go:282] 0 containers: []
	W1209 05:53:03.190016 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:53:03.190023 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:53:03.190100 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:53:03.214072 1437114 cri.go:89] found id: ""
	I1209 05:53:03.214100 1437114 logs.go:282] 0 containers: []
	W1209 05:53:03.214109 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:53:03.214115 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:53:03.214193 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:53:03.238571 1437114 cri.go:89] found id: ""
	I1209 05:53:03.238605 1437114 logs.go:282] 0 containers: []
	W1209 05:53:03.238614 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:53:03.238620 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:53:03.238713 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:53:03.262760 1437114 cri.go:89] found id: ""
	I1209 05:53:03.262791 1437114 logs.go:282] 0 containers: []
	W1209 05:53:03.262800 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:53:03.262825 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:53:03.262848 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:53:03.278402 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:53:03.278430 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:53:03.340382 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:53:03.332086    4439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:03.332485    4439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:03.334108    4439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:03.334685    4439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:03.336430    4439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:53:03.332086    4439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:03.332485    4439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:03.334108    4439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:03.334685    4439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:03.336430    4439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:53:03.340405 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:53:03.340420 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:53:03.367157 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:53:03.367193 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:53:03.394767 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:53:03.394794 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
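The "container status" collector in this cycle uses a shell fallback chain, visible in its Run: line above: resolve crictl via which, fall back to the bare name if the lookup fails, and fall back again to docker ps -a if the crictl invocation itself fails. The same pattern written with $() substitution instead of backticks (a sketch, behaviorally equivalent):

    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a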
	I1209 05:53:05.953212 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:53:05.965657 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:53:05.965739 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:53:06.020272 1437114 cri.go:89] found id: ""
	I1209 05:53:06.020296 1437114 logs.go:282] 0 containers: []
	W1209 05:53:06.020305 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:53:06.020311 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:53:06.020379 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:53:06.045735 1437114 cri.go:89] found id: ""
	I1209 05:53:06.045757 1437114 logs.go:282] 0 containers: []
	W1209 05:53:06.045766 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:53:06.045772 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:53:06.045832 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:53:06.072090 1437114 cri.go:89] found id: ""
	I1209 05:53:06.072119 1437114 logs.go:282] 0 containers: []
	W1209 05:53:06.072129 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:53:06.072136 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:53:06.072225 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:53:06.097096 1437114 cri.go:89] found id: ""
	I1209 05:53:06.097121 1437114 logs.go:282] 0 containers: []
	W1209 05:53:06.097130 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:53:06.097137 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:53:06.097214 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:53:06.121406 1437114 cri.go:89] found id: ""
	I1209 05:53:06.121431 1437114 logs.go:282] 0 containers: []
	W1209 05:53:06.121439 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:53:06.121446 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:53:06.121503 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:53:06.146550 1437114 cri.go:89] found id: ""
	I1209 05:53:06.146585 1437114 logs.go:282] 0 containers: []
	W1209 05:53:06.146594 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:53:06.146601 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:53:06.146667 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:53:06.173744 1437114 cri.go:89] found id: ""
	I1209 05:53:06.173779 1437114 logs.go:282] 0 containers: []
	W1209 05:53:06.173788 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:53:06.173794 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:53:06.173852 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:53:06.196867 1437114 cri.go:89] found id: ""
	I1209 05:53:06.196892 1437114 logs.go:282] 0 containers: []
	W1209 05:53:06.196901 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:53:06.196911 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:53:06.196922 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:53:06.252507 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:53:06.252544 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:53:06.268558 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:53:06.268588 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:53:06.335400 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:53:06.327269    4553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:06.327995    4553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:06.329562    4553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:06.330075    4553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:06.331590    4553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:53:06.335432 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:53:06.335445 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:53:06.361277 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:53:06.361311 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:53:08.892899 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:53:08.903128 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:53:08.903197 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:53:08.927271 1437114 cri.go:89] found id: ""
	I1209 05:53:08.927347 1437114 logs.go:282] 0 containers: []
	W1209 05:53:08.927363 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:53:08.927371 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:53:08.927437 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:53:08.958272 1437114 cri.go:89] found id: ""
	I1209 05:53:08.958296 1437114 logs.go:282] 0 containers: []
	W1209 05:53:08.958305 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:53:08.958312 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:53:08.958389 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:53:08.992109 1437114 cri.go:89] found id: ""
	I1209 05:53:08.992174 1437114 logs.go:282] 0 containers: []
	W1209 05:53:08.992196 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:53:08.992217 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:53:08.992284 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:53:09.021977 1437114 cri.go:89] found id: ""
	I1209 05:53:09.022053 1437114 logs.go:282] 0 containers: []
	W1209 05:53:09.022069 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:53:09.022076 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:53:09.022135 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:53:09.045707 1437114 cri.go:89] found id: ""
	I1209 05:53:09.045731 1437114 logs.go:282] 0 containers: []
	W1209 05:53:09.045739 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:53:09.045745 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:53:09.045801 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:53:09.070070 1437114 cri.go:89] found id: ""
	I1209 05:53:09.070103 1437114 logs.go:282] 0 containers: []
	W1209 05:53:09.070112 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:53:09.070118 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:53:09.070186 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:53:09.094488 1437114 cri.go:89] found id: ""
	I1209 05:53:09.094513 1437114 logs.go:282] 0 containers: []
	W1209 05:53:09.094530 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:53:09.094537 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:53:09.094606 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:53:09.118093 1437114 cri.go:89] found id: ""
	I1209 05:53:09.118132 1437114 logs.go:282] 0 containers: []
	W1209 05:53:09.118141 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:53:09.118150 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:53:09.118161 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:53:09.179308 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:53:09.171279    4659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:09.171791    4659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:09.173320    4659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:09.173784    4659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:09.175502    4659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:53:09.179376 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:53:09.179404 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:53:09.204829 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:53:09.204867 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:53:09.232053 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:53:09.232131 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:53:09.292412 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:53:09.292453 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:53:11.810473 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:53:11.820642 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:53:11.820731 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:53:11.844911 1437114 cri.go:89] found id: ""
	I1209 05:53:11.844935 1437114 logs.go:282] 0 containers: []
	W1209 05:53:11.844944 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:53:11.844951 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:53:11.845057 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:53:11.868554 1437114 cri.go:89] found id: ""
	I1209 05:53:11.868628 1437114 logs.go:282] 0 containers: []
	W1209 05:53:11.868642 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:53:11.868649 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:53:11.868713 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:53:11.893204 1437114 cri.go:89] found id: ""
	I1209 05:53:11.893229 1437114 logs.go:282] 0 containers: []
	W1209 05:53:11.893237 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:53:11.893243 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:53:11.893307 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:53:11.922205 1437114 cri.go:89] found id: ""
	I1209 05:53:11.922235 1437114 logs.go:282] 0 containers: []
	W1209 05:53:11.922244 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:53:11.922250 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:53:11.922314 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:53:11.969099 1437114 cri.go:89] found id: ""
	I1209 05:53:11.969172 1437114 logs.go:282] 0 containers: []
	W1209 05:53:11.969195 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:53:11.969222 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:53:11.969335 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:53:11.999668 1437114 cri.go:89] found id: ""
	I1209 05:53:11.999694 1437114 logs.go:282] 0 containers: []
	W1209 05:53:11.999702 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:53:11.999709 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:53:11.999798 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:53:12.027989 1437114 cri.go:89] found id: ""
	I1209 05:53:12.028053 1437114 logs.go:282] 0 containers: []
	W1209 05:53:12.028062 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:53:12.028083 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:53:12.028182 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:53:12.060174 1437114 cri.go:89] found id: ""
	I1209 05:53:12.060202 1437114 logs.go:282] 0 containers: []
	W1209 05:53:12.060211 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:53:12.060220 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:53:12.060260 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:53:12.121282 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:53:12.121323 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:53:12.137566 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:53:12.137595 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:53:12.205667 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:53:12.197778    4774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:12.198341    4774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:12.199791    4774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:12.200371    4774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:12.201936    4774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:53:12.205687 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:53:12.205700 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:53:12.230499 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:53:12.230532 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:53:14.761775 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:53:14.772764 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:53:14.772836 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:53:14.796366 1437114 cri.go:89] found id: ""
	I1209 05:53:14.796391 1437114 logs.go:282] 0 containers: []
	W1209 05:53:14.796399 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:53:14.796406 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:53:14.796479 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:53:14.821766 1437114 cri.go:89] found id: ""
	I1209 05:53:14.821793 1437114 logs.go:282] 0 containers: []
	W1209 05:53:14.821802 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:53:14.821808 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:53:14.821868 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:53:14.846798 1437114 cri.go:89] found id: ""
	I1209 05:53:14.846823 1437114 logs.go:282] 0 containers: []
	W1209 05:53:14.846832 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:53:14.846838 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:53:14.846896 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:53:14.870638 1437114 cri.go:89] found id: ""
	I1209 05:53:14.870668 1437114 logs.go:282] 0 containers: []
	W1209 05:53:14.870677 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:53:14.870683 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:53:14.870741 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:53:14.894543 1437114 cri.go:89] found id: ""
	I1209 05:53:14.894571 1437114 logs.go:282] 0 containers: []
	W1209 05:53:14.894580 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:53:14.894586 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:53:14.894650 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:53:14.918572 1437114 cri.go:89] found id: ""
	I1209 05:53:14.918601 1437114 logs.go:282] 0 containers: []
	W1209 05:53:14.918610 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:53:14.918617 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:53:14.918699 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:53:14.947884 1437114 cri.go:89] found id: ""
	I1209 05:53:14.947914 1437114 logs.go:282] 0 containers: []
	W1209 05:53:14.947922 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:53:14.947928 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:53:14.948004 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:53:14.989982 1437114 cri.go:89] found id: ""
	I1209 05:53:14.990055 1437114 logs.go:282] 0 containers: []
	W1209 05:53:14.990078 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:53:14.990099 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:53:14.990137 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:53:15.012208 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:53:15.012307 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:53:15.086674 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:53:15.078145    4882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:15.078866    4882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:15.080581    4882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:15.081087    4882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:15.082649    4882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:53:15.086740 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:53:15.086766 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:53:15.112587 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:53:15.112623 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:53:15.141472 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:53:15.141502 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:53:17.701838 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:53:17.713895 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:53:17.713963 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:53:17.745334 1437114 cri.go:89] found id: ""
	I1209 05:53:17.745357 1437114 logs.go:282] 0 containers: []
	W1209 05:53:17.745366 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:53:17.745372 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:53:17.745470 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:53:17.770153 1437114 cri.go:89] found id: ""
	I1209 05:53:17.770220 1437114 logs.go:282] 0 containers: []
	W1209 05:53:17.770244 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:53:17.770263 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:53:17.770326 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:53:17.795244 1437114 cri.go:89] found id: ""
	I1209 05:53:17.795278 1437114 logs.go:282] 0 containers: []
	W1209 05:53:17.795287 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:53:17.795293 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:53:17.795388 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:53:17.822017 1437114 cri.go:89] found id: ""
	I1209 05:53:17.822040 1437114 logs.go:282] 0 containers: []
	W1209 05:53:17.822049 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:53:17.822055 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:53:17.822132 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:53:17.850510 1437114 cri.go:89] found id: ""
	I1209 05:53:17.850532 1437114 logs.go:282] 0 containers: []
	W1209 05:53:17.850541 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:53:17.850566 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:53:17.850624 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:53:17.875231 1437114 cri.go:89] found id: ""
	I1209 05:53:17.875314 1437114 logs.go:282] 0 containers: []
	W1209 05:53:17.875337 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:53:17.875359 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:53:17.875488 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:53:17.901146 1437114 cri.go:89] found id: ""
	I1209 05:53:17.901169 1437114 logs.go:282] 0 containers: []
	W1209 05:53:17.901178 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:53:17.901207 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:53:17.901291 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:53:17.924362 1437114 cri.go:89] found id: ""
	I1209 05:53:17.924386 1437114 logs.go:282] 0 containers: []
	W1209 05:53:17.924395 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:53:17.924404 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:53:17.924415 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:53:17.987361 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:53:17.987403 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:53:18.004290 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:53:18.004323 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:53:18.072148 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:53:18.062877    5000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:18.063667    5000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:18.065532    5000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:18.066146    5000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:18.067899    5000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:53:18.072181 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:53:18.072194 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:53:18.098033 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:53:18.098071 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:53:20.625561 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:53:20.635963 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:53:20.636053 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:53:20.659961 1437114 cri.go:89] found id: ""
	I1209 05:53:20.659984 1437114 logs.go:282] 0 containers: []
	W1209 05:53:20.659994 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:53:20.660000 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:53:20.660075 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:53:20.690085 1437114 cri.go:89] found id: ""
	I1209 05:53:20.690119 1437114 logs.go:282] 0 containers: []
	W1209 05:53:20.690128 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:53:20.690134 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:53:20.690199 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:53:20.722202 1437114 cri.go:89] found id: ""
	I1209 05:53:20.722238 1437114 logs.go:282] 0 containers: []
	W1209 05:53:20.722247 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:53:20.722254 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:53:20.722319 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:53:20.754033 1437114 cri.go:89] found id: ""
	I1209 05:53:20.754057 1437114 logs.go:282] 0 containers: []
	W1209 05:53:20.754066 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:53:20.754073 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:53:20.754157 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:53:20.778306 1437114 cri.go:89] found id: ""
	I1209 05:53:20.778332 1437114 logs.go:282] 0 containers: []
	W1209 05:53:20.778341 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:53:20.778349 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:53:20.778427 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:53:20.802477 1437114 cri.go:89] found id: ""
	I1209 05:53:20.802501 1437114 logs.go:282] 0 containers: []
	W1209 05:53:20.802510 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:53:20.802516 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:53:20.802605 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:53:20.833205 1437114 cri.go:89] found id: ""
	I1209 05:53:20.833231 1437114 logs.go:282] 0 containers: []
	W1209 05:53:20.833239 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:53:20.833246 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:53:20.833310 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:53:20.858107 1437114 cri.go:89] found id: ""
	I1209 05:53:20.858172 1437114 logs.go:282] 0 containers: []
	W1209 05:53:20.858188 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:53:20.858198 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:53:20.858209 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:53:20.914050 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:53:20.914088 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:53:20.930297 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:53:20.930326 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:53:21.009735 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:53:20.998811    5112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:20.999637    5112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:21.001322    5112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:21.001871    5112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:21.003770    5112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:53:21.009759 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:53:21.009772 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:53:21.035653 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:53:21.035687 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:53:23.563248 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:53:23.574010 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:53:23.574087 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:53:23.603557 1437114 cri.go:89] found id: ""
	I1209 05:53:23.603583 1437114 logs.go:282] 0 containers: []
	W1209 05:53:23.603593 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:53:23.603599 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:53:23.603658 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:53:23.629927 1437114 cri.go:89] found id: ""
	I1209 05:53:23.629953 1437114 logs.go:282] 0 containers: []
	W1209 05:53:23.629961 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:53:23.629967 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:53:23.630029 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:53:23.654017 1437114 cri.go:89] found id: ""
	I1209 05:53:23.654042 1437114 logs.go:282] 0 containers: []
	W1209 05:53:23.654050 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:53:23.654057 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:53:23.654114 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:53:23.681104 1437114 cri.go:89] found id: ""
	I1209 05:53:23.681126 1437114 logs.go:282] 0 containers: []
	W1209 05:53:23.681134 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:53:23.681140 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:53:23.681210 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:53:23.717733 1437114 cri.go:89] found id: ""
	I1209 05:53:23.717754 1437114 logs.go:282] 0 containers: []
	W1209 05:53:23.717763 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:53:23.717769 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:53:23.717826 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:53:23.746697 1437114 cri.go:89] found id: ""
	I1209 05:53:23.746718 1437114 logs.go:282] 0 containers: []
	W1209 05:53:23.746727 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:53:23.746734 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:53:23.746791 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:53:23.771013 1437114 cri.go:89] found id: ""
	I1209 05:53:23.771035 1437114 logs.go:282] 0 containers: []
	W1209 05:53:23.771043 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:53:23.771049 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:53:23.771110 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:53:23.797671 1437114 cri.go:89] found id: ""
	I1209 05:53:23.797695 1437114 logs.go:282] 0 containers: []
	W1209 05:53:23.797705 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:53:23.797714 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:53:23.797727 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:53:23.863004 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:53:23.854866    5216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:23.855647    5216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:23.857241    5216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:23.857752    5216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:23.859306    5216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:53:23.863025 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:53:23.863039 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:53:23.888849 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:53:23.888886 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:53:23.918103 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:53:23.918129 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:53:23.981103 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:53:23.981139 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:53:26.502565 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:53:26.513114 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:53:26.513204 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:53:26.536286 1437114 cri.go:89] found id: ""
	I1209 05:53:26.536352 1437114 logs.go:282] 0 containers: []
	W1209 05:53:26.536366 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:53:26.536373 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:53:26.536448 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:53:26.567137 1437114 cri.go:89] found id: ""
	I1209 05:53:26.567165 1437114 logs.go:282] 0 containers: []
	W1209 05:53:26.567174 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:53:26.567181 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:53:26.567255 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:53:26.593992 1437114 cri.go:89] found id: ""
	I1209 05:53:26.594018 1437114 logs.go:282] 0 containers: []
	W1209 05:53:26.594027 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:53:26.594033 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:53:26.594112 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:53:26.622318 1437114 cri.go:89] found id: ""
	I1209 05:53:26.622341 1437114 logs.go:282] 0 containers: []
	W1209 05:53:26.622349 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:53:26.622356 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:53:26.622436 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:53:26.647615 1437114 cri.go:89] found id: ""
	I1209 05:53:26.647689 1437114 logs.go:282] 0 containers: []
	W1209 05:53:26.647724 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:53:26.647744 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:53:26.647837 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:53:26.672100 1437114 cri.go:89] found id: ""
	I1209 05:53:26.672174 1437114 logs.go:282] 0 containers: []
	W1209 05:53:26.672189 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:53:26.672197 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:53:26.672268 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:53:26.702289 1437114 cri.go:89] found id: ""
	I1209 05:53:26.702322 1437114 logs.go:282] 0 containers: []
	W1209 05:53:26.702331 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:53:26.702355 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:53:26.702438 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:53:26.732737 1437114 cri.go:89] found id: ""
	I1209 05:53:26.732807 1437114 logs.go:282] 0 containers: []
	W1209 05:53:26.732831 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:53:26.732855 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:53:26.732894 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:53:26.749702 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:53:26.749778 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:53:26.813476 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:53:26.805499    5331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:26.805968    5331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:26.807499    5331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:26.807884    5331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:26.809521    5331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:53:26.813510 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:53:26.813524 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:53:26.839545 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:53:26.839583 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:53:26.866441 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:53:26.866469 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:53:29.424166 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:53:29.435921 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:53:29.435993 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:53:29.462038 1437114 cri.go:89] found id: ""
	I1209 05:53:29.462060 1437114 logs.go:282] 0 containers: []
	W1209 05:53:29.462068 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:53:29.462074 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:53:29.462134 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:53:29.485671 1437114 cri.go:89] found id: ""
	I1209 05:53:29.485695 1437114 logs.go:282] 0 containers: []
	W1209 05:53:29.485704 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:53:29.485710 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:53:29.485765 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:53:29.508799 1437114 cri.go:89] found id: ""
	I1209 05:53:29.508829 1437114 logs.go:282] 0 containers: []
	W1209 05:53:29.508838 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:53:29.508844 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:53:29.508910 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:53:29.533027 1437114 cri.go:89] found id: ""
	I1209 05:53:29.533052 1437114 logs.go:282] 0 containers: []
	W1209 05:53:29.533060 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:53:29.533066 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:53:29.533151 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:53:29.565784 1437114 cri.go:89] found id: ""
	I1209 05:53:29.565811 1437114 logs.go:282] 0 containers: []
	W1209 05:53:29.565819 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:53:29.565825 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:53:29.565882 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:53:29.590917 1437114 cri.go:89] found id: ""
	I1209 05:53:29.590943 1437114 logs.go:282] 0 containers: []
	W1209 05:53:29.590951 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:53:29.590957 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:53:29.591014 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:53:29.618282 1437114 cri.go:89] found id: ""
	I1209 05:53:29.618307 1437114 logs.go:282] 0 containers: []
	W1209 05:53:29.618316 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:53:29.618322 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:53:29.618381 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:53:29.646902 1437114 cri.go:89] found id: ""
	I1209 05:53:29.646936 1437114 logs.go:282] 0 containers: []
	W1209 05:53:29.646946 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
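	The scan above is the same check run once per control-plane component: "sudo crictl ps -a --quiet --name=<component>", with empty output treated as "no container found". A hedged sketch of that loop, assuming crictl is on PATH and sudo is available (the component names are copied from the log):

	// scan_containers.go: list CRI containers per component name and flag
	// the ones with no matching container, as the log does.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
		}
		for _, name := range components {
			// crictl exits 0 with empty stdout when nothing matches.
			out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
			if err != nil {
				fmt.Printf("%s: crictl failed: %v\n", name, err)
				continue
			}
			if ids := strings.TrimSpace(string(out)); ids != "" {
				fmt.Printf("%s: %s\n", name, ids)
			} else {
				fmt.Printf("no container found matching %q\n", name)
			}
		}
	}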
	I1209 05:53:29.646955 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:53:29.646973 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:53:29.707743 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:53:29.707828 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:53:29.724421 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:53:29.724499 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:53:29.794074 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:53:29.785906    5445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:29.786405    5445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:29.787873    5445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:29.788573    5445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:29.790227    5445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:53:29.794139 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:53:29.794180 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:53:29.820222 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:53:29.820259 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:53:32.350724 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:53:32.361228 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:53:32.361300 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:53:32.389541 1437114 cri.go:89] found id: ""
	I1209 05:53:32.389564 1437114 logs.go:282] 0 containers: []
	W1209 05:53:32.389572 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:53:32.389578 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:53:32.389637 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:53:32.412985 1437114 cri.go:89] found id: ""
	I1209 05:53:32.413008 1437114 logs.go:282] 0 containers: []
	W1209 05:53:32.413017 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:53:32.413023 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:53:32.413100 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:53:32.436603 1437114 cri.go:89] found id: ""
	I1209 05:53:32.436628 1437114 logs.go:282] 0 containers: []
	W1209 05:53:32.436637 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:53:32.436644 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:53:32.436703 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:53:32.461975 1437114 cri.go:89] found id: ""
	I1209 05:53:32.462039 1437114 logs.go:282] 0 containers: []
	W1209 05:53:32.462053 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:53:32.462060 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:53:32.462122 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:53:32.485536 1437114 cri.go:89] found id: ""
	I1209 05:53:32.485560 1437114 logs.go:282] 0 containers: []
	W1209 05:53:32.485568 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:53:32.485574 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:53:32.485633 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:53:32.509130 1437114 cri.go:89] found id: ""
	I1209 05:53:32.509159 1437114 logs.go:282] 0 containers: []
	W1209 05:53:32.509168 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:53:32.509175 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:53:32.509253 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:53:32.532336 1437114 cri.go:89] found id: ""
	I1209 05:53:32.532366 1437114 logs.go:282] 0 containers: []
	W1209 05:53:32.532374 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:53:32.532381 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:53:32.532465 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:53:32.556282 1437114 cri.go:89] found id: ""
	I1209 05:53:32.556319 1437114 logs.go:282] 0 containers: []
	W1209 05:53:32.556329 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:53:32.556338 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:53:32.556352 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:53:32.572109 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:53:32.572183 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:53:32.633108 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:53:32.624780    5553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:32.625448    5553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:32.627074    5553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:32.627615    5553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:32.629220    5553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:53:32.633141 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:53:32.633155 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:53:32.662184 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:53:32.662225 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:53:32.702034 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:53:32.702063 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:53:35.266899 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:53:35.277229 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:53:35.277296 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:53:35.300790 1437114 cri.go:89] found id: ""
	I1209 05:53:35.300814 1437114 logs.go:282] 0 containers: []
	W1209 05:53:35.300823 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:53:35.300830 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:53:35.300892 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:53:35.325182 1437114 cri.go:89] found id: ""
	I1209 05:53:35.325204 1437114 logs.go:282] 0 containers: []
	W1209 05:53:35.325212 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:53:35.325218 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:53:35.325280 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:53:35.353701 1437114 cri.go:89] found id: ""
	I1209 05:53:35.353727 1437114 logs.go:282] 0 containers: []
	W1209 05:53:35.353735 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:53:35.353741 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:53:35.353802 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:53:35.377248 1437114 cri.go:89] found id: ""
	I1209 05:53:35.377272 1437114 logs.go:282] 0 containers: []
	W1209 05:53:35.377281 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:53:35.377288 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:53:35.377347 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:53:35.401542 1437114 cri.go:89] found id: ""
	I1209 05:53:35.401568 1437114 logs.go:282] 0 containers: []
	W1209 05:53:35.401577 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:53:35.401584 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:53:35.401663 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:53:35.426460 1437114 cri.go:89] found id: ""
	I1209 05:53:35.426488 1437114 logs.go:282] 0 containers: []
	W1209 05:53:35.426497 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:53:35.426503 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:53:35.426561 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:53:35.454120 1437114 cri.go:89] found id: ""
	I1209 05:53:35.454145 1437114 logs.go:282] 0 containers: []
	W1209 05:53:35.454154 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:53:35.454160 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:53:35.454217 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:53:35.478639 1437114 cri.go:89] found id: ""
	I1209 05:53:35.478664 1437114 logs.go:282] 0 containers: []
	W1209 05:53:35.478673 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:53:35.478681 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:53:35.478692 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:53:35.504448 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:53:35.504487 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:53:35.533724 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:53:35.533751 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:53:35.589526 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:53:35.589560 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:53:35.605319 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:53:35.605345 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:53:35.676318 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:53:35.668651    5679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:35.669162    5679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:35.670613    5679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:35.671063    5679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:35.672483    5679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
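	The timestamps (05:53:29, 05:53:32, 05:53:35, ...) show the apiserver probe repeating roughly every three seconds. A sketch of that poll loop; the interval and deadline here are illustrative guesses, not minikube's actual values:

	// poll_apiserver.go: retry the pgrep probe from the log until a
	// kube-apiserver process appears or the deadline passes.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			// pgrep exits non-zero when no process matches, so err != nil
			// means the apiserver is still not running.
			out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
			if err == nil {
				fmt.Printf("kube-apiserver running, pid(s): %s", out)
				return
			}
			time.Sleep(3 * time.Second)
		}
		fmt.Println("timed out waiting for kube-apiserver to start")
	}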
	I1209 05:53:38.177618 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:53:38.191936 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:53:38.192007 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:53:38.225078 1437114 cri.go:89] found id: ""
	I1209 05:53:38.225117 1437114 logs.go:282] 0 containers: []
	W1209 05:53:38.225126 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:53:38.225133 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:53:38.225204 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:53:38.257246 1437114 cri.go:89] found id: ""
	I1209 05:53:38.257272 1437114 logs.go:282] 0 containers: []
	W1209 05:53:38.257281 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:53:38.257286 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:53:38.257350 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:53:38.286060 1437114 cri.go:89] found id: ""
	I1209 05:53:38.286083 1437114 logs.go:282] 0 containers: []
	W1209 05:53:38.286091 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:53:38.286097 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:53:38.286158 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:53:38.315924 1437114 cri.go:89] found id: ""
	I1209 05:53:38.315989 1437114 logs.go:282] 0 containers: []
	W1209 05:53:38.316050 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:53:38.316081 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:53:38.316148 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:53:38.340319 1437114 cri.go:89] found id: ""
	I1209 05:53:38.340348 1437114 logs.go:282] 0 containers: []
	W1209 05:53:38.340357 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:53:38.340363 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:53:38.340424 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:53:38.365184 1437114 cri.go:89] found id: ""
	I1209 05:53:38.365220 1437114 logs.go:282] 0 containers: []
	W1209 05:53:38.365229 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:53:38.365235 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:53:38.365307 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:53:38.389641 1437114 cri.go:89] found id: ""
	I1209 05:53:38.389720 1437114 logs.go:282] 0 containers: []
	W1209 05:53:38.389744 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:53:38.389759 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:53:38.389832 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:53:38.420280 1437114 cri.go:89] found id: ""
	I1209 05:53:38.420306 1437114 logs.go:282] 0 containers: []
	W1209 05:53:38.420315 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:53:38.420324 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:53:38.420353 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:53:38.476252 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:53:38.476288 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:53:38.492393 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:53:38.492472 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:53:38.557826 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:53:38.549594    5780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:38.550283    5780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:38.551905    5780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:38.552451    5780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:38.553989    5780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:53:38.557849 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:53:38.557862 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:53:38.583171 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:53:38.583206 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:53:41.110406 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:53:41.120474 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:53:41.120545 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:53:41.145006 1437114 cri.go:89] found id: ""
	I1209 05:53:41.145030 1437114 logs.go:282] 0 containers: []
	W1209 05:53:41.145038 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:53:41.145044 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:53:41.145100 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:53:41.168892 1437114 cri.go:89] found id: ""
	I1209 05:53:41.168917 1437114 logs.go:282] 0 containers: []
	W1209 05:53:41.168925 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:53:41.168932 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:53:41.168989 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:53:41.206601 1437114 cri.go:89] found id: ""
	I1209 05:53:41.206630 1437114 logs.go:282] 0 containers: []
	W1209 05:53:41.206641 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:53:41.206653 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:53:41.206721 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:53:41.247172 1437114 cri.go:89] found id: ""
	I1209 05:53:41.247204 1437114 logs.go:282] 0 containers: []
	W1209 05:53:41.247212 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:53:41.247219 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:53:41.247276 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:53:41.271589 1437114 cri.go:89] found id: ""
	I1209 05:53:41.271613 1437114 logs.go:282] 0 containers: []
	W1209 05:53:41.271621 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:53:41.271628 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:53:41.271714 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:53:41.298007 1437114 cri.go:89] found id: ""
	I1209 05:53:41.298032 1437114 logs.go:282] 0 containers: []
	W1209 05:53:41.298041 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:53:41.298047 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:53:41.298105 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:53:41.325987 1437114 cri.go:89] found id: ""
	I1209 05:53:41.326010 1437114 logs.go:282] 0 containers: []
	W1209 05:53:41.326025 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:53:41.326050 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:53:41.326131 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:53:41.351424 1437114 cri.go:89] found id: ""
	I1209 05:53:41.351449 1437114 logs.go:282] 0 containers: []
	W1209 05:53:41.351457 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:53:41.351466 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:53:41.351476 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:53:41.376872 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:53:41.376906 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:53:41.405296 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:53:41.405322 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:53:41.461131 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:53:41.461167 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:53:41.477891 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:53:41.477920 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:53:41.546568 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:53:41.537827    5907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:41.538521    5907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:41.540212    5907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:41.540814    5907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:41.542724    5907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:53:44.046855 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:53:44.058136 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:53:44.058209 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:53:44.086287 1437114 cri.go:89] found id: ""
	I1209 05:53:44.086311 1437114 logs.go:282] 0 containers: []
	W1209 05:53:44.086320 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:53:44.086326 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:53:44.086390 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:53:44.110388 1437114 cri.go:89] found id: ""
	I1209 05:53:44.110411 1437114 logs.go:282] 0 containers: []
	W1209 05:53:44.110419 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:53:44.110425 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:53:44.110481 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:53:44.134842 1437114 cri.go:89] found id: ""
	I1209 05:53:44.134864 1437114 logs.go:282] 0 containers: []
	W1209 05:53:44.134873 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:53:44.134879 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:53:44.134936 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:53:44.161691 1437114 cri.go:89] found id: ""
	I1209 05:53:44.161716 1437114 logs.go:282] 0 containers: []
	W1209 05:53:44.161725 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:53:44.161732 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:53:44.161789 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:53:44.195302 1437114 cri.go:89] found id: ""
	I1209 05:53:44.195326 1437114 logs.go:282] 0 containers: []
	W1209 05:53:44.195335 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:53:44.195341 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:53:44.195408 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:53:44.225882 1437114 cri.go:89] found id: ""
	I1209 05:53:44.225907 1437114 logs.go:282] 0 containers: []
	W1209 05:53:44.225916 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:53:44.225922 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:53:44.225981 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:53:44.253610 1437114 cri.go:89] found id: ""
	I1209 05:53:44.253636 1437114 logs.go:282] 0 containers: []
	W1209 05:53:44.253645 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:53:44.253655 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:53:44.253734 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:53:44.281815 1437114 cri.go:89] found id: ""
	I1209 05:53:44.281840 1437114 logs.go:282] 0 containers: []
	W1209 05:53:44.281848 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:53:44.281857 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:53:44.281868 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:53:44.339663 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:53:44.339702 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:53:44.355859 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:53:44.355938 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:53:44.429444 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:53:44.421835    6007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:44.422435    6007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:44.423949    6007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:44.424455    6007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:44.425745    6007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:53:44.429466 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:53:44.429483 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:53:44.455230 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:53:44.455267 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:53:46.982212 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:53:46.993498 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:53:46.993587 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:53:47.023958 1437114 cri.go:89] found id: ""
	I1209 05:53:47.023982 1437114 logs.go:282] 0 containers: []
	W1209 05:53:47.023991 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:53:47.023997 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:53:47.024069 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:53:47.048879 1437114 cri.go:89] found id: ""
	I1209 05:53:47.048901 1437114 logs.go:282] 0 containers: []
	W1209 05:53:47.048910 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:53:47.048916 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:53:47.048983 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:53:47.073853 1437114 cri.go:89] found id: ""
	I1209 05:53:47.073878 1437114 logs.go:282] 0 containers: []
	W1209 05:53:47.073886 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:53:47.073894 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:53:47.073955 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:53:47.096844 1437114 cri.go:89] found id: ""
	I1209 05:53:47.096869 1437114 logs.go:282] 0 containers: []
	W1209 05:53:47.096877 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:53:47.096884 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:53:47.096945 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:53:47.120160 1437114 cri.go:89] found id: ""
	I1209 05:53:47.120185 1437114 logs.go:282] 0 containers: []
	W1209 05:53:47.120194 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:53:47.120200 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:53:47.120261 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:53:47.145073 1437114 cri.go:89] found id: ""
	I1209 05:53:47.145139 1437114 logs.go:282] 0 containers: []
	W1209 05:53:47.145155 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:53:47.145163 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:53:47.145226 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:53:47.168839 1437114 cri.go:89] found id: ""
	I1209 05:53:47.168862 1437114 logs.go:282] 0 containers: []
	W1209 05:53:47.168870 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:53:47.168878 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:53:47.168956 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:53:47.200241 1437114 cri.go:89] found id: ""
	I1209 05:53:47.200264 1437114 logs.go:282] 0 containers: []
	W1209 05:53:47.200272 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:53:47.200282 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:53:47.200311 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:53:47.261748 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:53:47.261783 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:53:47.277688 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:53:47.277718 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:53:47.342796 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:53:47.334710    6118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:47.335374    6118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:47.336895    6118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:47.337477    6118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:47.338953    6118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:53:47.342859 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:53:47.342886 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:53:47.367837 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:53:47.367872 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:53:49.896241 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:53:49.908838 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:53:49.908918 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:53:49.942190 1437114 cri.go:89] found id: ""
	I1209 05:53:49.942212 1437114 logs.go:282] 0 containers: []
	W1209 05:53:49.942221 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:53:49.942226 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:53:49.942387 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:53:49.977371 1437114 cri.go:89] found id: ""
	I1209 05:53:49.977393 1437114 logs.go:282] 0 containers: []
	W1209 05:53:49.977401 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:53:49.977408 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:53:49.977468 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:53:50.002223 1437114 cri.go:89] found id: ""
	I1209 05:53:50.002247 1437114 logs.go:282] 0 containers: []
	W1209 05:53:50.002255 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:53:50.002262 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:53:50.002326 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:53:50.032431 1437114 cri.go:89] found id: ""
	I1209 05:53:50.032458 1437114 logs.go:282] 0 containers: []
	W1209 05:53:50.032467 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:53:50.032474 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:53:50.032535 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:53:50.062289 1437114 cri.go:89] found id: ""
	I1209 05:53:50.062314 1437114 logs.go:282] 0 containers: []
	W1209 05:53:50.062323 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:53:50.062329 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:53:50.062418 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:53:50.088271 1437114 cri.go:89] found id: ""
	I1209 05:53:50.088298 1437114 logs.go:282] 0 containers: []
	W1209 05:53:50.088307 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:53:50.088313 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:53:50.088382 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:53:50.114549 1437114 cri.go:89] found id: ""
	I1209 05:53:50.114629 1437114 logs.go:282] 0 containers: []
	W1209 05:53:50.115120 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:53:50.115137 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:53:50.115209 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:53:50.141196 1437114 cri.go:89] found id: ""
	I1209 05:53:50.141276 1437114 logs.go:282] 0 containers: []
	W1209 05:53:50.141298 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:53:50.141318 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:53:50.141353 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:53:50.198211 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:53:50.198284 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:53:50.215943 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:53:50.216047 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:53:50.281793 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:53:50.272885    6231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:50.273579    6231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:50.275295    6231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:50.275902    6231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:50.277606    6231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:53:50.272885    6231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:50.273579    6231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:50.275295    6231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:50.275902    6231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:50.277606    6231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:53:50.281814 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:53:50.281826 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:53:50.308006 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:53:50.308052 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
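The cycle above is the harness's apiserver wait loop: it pgreps for a kube-apiserver process, then asks the CRI runtime for each control-plane container by name, and finds none. A minimal sketch of the same check, runnable by hand inside the node (assuming crictl is on PATH and sudo is available; the loop body mirrors the commands in the log):

	# Query containerd through crictl exactly as the harness does:
	# list containers in any state whose name matches each expected component.
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	            kube-controller-manager kindnet kubernetes-dashboard; do
	  ids=$(sudo crictl ps -a --quiet --name="$name")
	  [ -z "$ids" ] && echo "no container matching \"$name\""
	done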
	I1209 05:53:52.837556 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:53:52.848136 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:53:52.848208 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:53:52.872274 1437114 cri.go:89] found id: ""
	I1209 05:53:52.872302 1437114 logs.go:282] 0 containers: []
	W1209 05:53:52.872310 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:53:52.872317 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:53:52.872375 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:53:52.899101 1437114 cri.go:89] found id: ""
	I1209 05:53:52.899125 1437114 logs.go:282] 0 containers: []
	W1209 05:53:52.899134 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:53:52.899140 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:53:52.899199 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:53:52.926800 1437114 cri.go:89] found id: ""
	I1209 05:53:52.926825 1437114 logs.go:282] 0 containers: []
	W1209 05:53:52.926834 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:53:52.926840 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:53:52.926900 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:53:52.962012 1437114 cri.go:89] found id: ""
	I1209 05:53:52.962037 1437114 logs.go:282] 0 containers: []
	W1209 05:53:52.962055 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:53:52.962063 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:53:52.962140 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:53:52.996310 1437114 cri.go:89] found id: ""
	I1209 05:53:52.996336 1437114 logs.go:282] 0 containers: []
	W1209 05:53:52.996345 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:53:52.996351 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:53:52.996410 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:53:53.031535 1437114 cri.go:89] found id: ""
	I1209 05:53:53.031563 1437114 logs.go:282] 0 containers: []
	W1209 05:53:53.031572 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:53:53.031578 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:53:53.031637 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:53:53.059974 1437114 cri.go:89] found id: ""
	I1209 05:53:53.060004 1437114 logs.go:282] 0 containers: []
	W1209 05:53:53.060030 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:53:53.060038 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:53:53.060096 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:53:53.085290 1437114 cri.go:89] found id: ""
	I1209 05:53:53.085356 1437114 logs.go:282] 0 containers: []
	W1209 05:53:53.085386 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:53:53.085403 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:53:53.085415 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:53:53.142442 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:53:53.142477 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:53:53.159141 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:53:53.159169 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:53:53.237761 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:53:53.229474    6343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:53.230237    6343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:53.231874    6343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:53.232214    6343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:53.233652    6343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:53:53.229474    6343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:53.230237    6343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:53.231874    6343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:53.232214    6343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:53.233652    6343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:53:53.237779 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:53:53.237791 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:53:53.265602 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:53:53.265679 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
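The "container status" step deliberately degrades: "which crictl || echo crictl" keeps the command word non-empty even when crictl is missing from PATH, and the trailing "|| sudo docker ps -a" falls back to Docker if the crictl invocation fails for any reason. The same pattern in isolation, copied from the Run line above with comments added:

	# Prefer crictl when present; otherwise the bare name fails
	# and the listing falls through to docker.
	sudo `which crictl || echo crictl` ps -a || sudo docker ps -a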
	I1209 05:53:55.800068 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:53:55.810556 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:53:55.810627 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:53:55.836257 1437114 cri.go:89] found id: ""
	I1209 05:53:55.836280 1437114 logs.go:282] 0 containers: []
	W1209 05:53:55.836289 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:53:55.836295 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:53:55.836352 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:53:55.861759 1437114 cri.go:89] found id: ""
	I1209 05:53:55.861783 1437114 logs.go:282] 0 containers: []
	W1209 05:53:55.861792 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:53:55.861798 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:53:55.861865 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:53:55.886950 1437114 cri.go:89] found id: ""
	I1209 05:53:55.886982 1437114 logs.go:282] 0 containers: []
	W1209 05:53:55.886991 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:53:55.886997 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:53:55.887072 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:53:55.912055 1437114 cri.go:89] found id: ""
	I1209 05:53:55.912081 1437114 logs.go:282] 0 containers: []
	W1209 05:53:55.912089 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:53:55.912096 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:53:55.912162 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:53:55.949365 1437114 cri.go:89] found id: ""
	I1209 05:53:55.949431 1437114 logs.go:282] 0 containers: []
	W1209 05:53:55.949455 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:53:55.949471 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:53:55.949545 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:53:55.977916 1437114 cri.go:89] found id: ""
	I1209 05:53:55.977938 1437114 logs.go:282] 0 containers: []
	W1209 05:53:55.977946 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:53:55.977953 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:53:55.978040 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:53:56.013033 1437114 cri.go:89] found id: ""
	I1209 05:53:56.013070 1437114 logs.go:282] 0 containers: []
	W1209 05:53:56.013079 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:53:56.013086 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:53:56.013177 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:53:56.039563 1437114 cri.go:89] found id: ""
	I1209 05:53:56.039610 1437114 logs.go:282] 0 containers: []
	W1209 05:53:56.039620 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:53:56.039629 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:53:56.039641 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:53:56.065976 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:53:56.066014 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:53:56.097703 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:53:56.097732 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:53:56.156555 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:53:56.156594 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:53:56.172549 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:53:56.172576 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:53:56.257220 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:53:56.248866    6469 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:56.249574    6469 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:56.251225    6469 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:56.251719    6469 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:56.253344    6469 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:53:56.248866    6469 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:56.249574    6469 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:56.251225    6469 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:56.251719    6469 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:56.253344    6469 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:53:58.758071 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:53:58.768718 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:53:58.768796 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:53:58.793984 1437114 cri.go:89] found id: ""
	I1209 05:53:58.794007 1437114 logs.go:282] 0 containers: []
	W1209 05:53:58.794015 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:53:58.794021 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:53:58.794078 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:53:58.818550 1437114 cri.go:89] found id: ""
	I1209 05:53:58.818574 1437114 logs.go:282] 0 containers: []
	W1209 05:53:58.818582 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:53:58.818589 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:53:58.818648 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:53:58.843617 1437114 cri.go:89] found id: ""
	I1209 05:53:58.843696 1437114 logs.go:282] 0 containers: []
	W1209 05:53:58.843719 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:53:58.843738 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:53:58.843809 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:53:58.868732 1437114 cri.go:89] found id: ""
	I1209 05:53:58.868754 1437114 logs.go:282] 0 containers: []
	W1209 05:53:58.868763 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:53:58.868769 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:53:58.868823 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:53:58.892930 1437114 cri.go:89] found id: ""
	I1209 05:53:58.892953 1437114 logs.go:282] 0 containers: []
	W1209 05:53:58.892961 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:53:58.892968 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:53:58.893027 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:53:58.917833 1437114 cri.go:89] found id: ""
	I1209 05:53:58.917857 1437114 logs.go:282] 0 containers: []
	W1209 05:53:58.917865 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:53:58.917872 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:53:58.917933 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:53:58.965955 1437114 cri.go:89] found id: ""
	I1209 05:53:58.965982 1437114 logs.go:282] 0 containers: []
	W1209 05:53:58.965990 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:53:58.965996 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:53:58.966054 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:53:58.999708 1437114 cri.go:89] found id: ""
	I1209 05:53:58.999736 1437114 logs.go:282] 0 containers: []
	W1209 05:53:58.999744 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:53:58.999754 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:53:58.999764 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:53:59.065757 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:53:59.057189    6561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:59.058037    6561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:59.059660    6561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:59.060056    6561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:59.061679    6561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:53:59.057189    6561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:59.058037    6561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:59.059660    6561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:59.060056    6561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:59.061679    6561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:53:59.065776 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:53:59.065788 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:53:59.090908 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:53:59.090944 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:53:59.118148 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:53:59.118180 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:53:59.175439 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:53:59.175476 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:54:01.697656 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:54:01.712348 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:54:01.712424 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:54:01.743582 1437114 cri.go:89] found id: ""
	I1209 05:54:01.743609 1437114 logs.go:282] 0 containers: []
	W1209 05:54:01.743618 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:54:01.743625 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:54:01.743688 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:54:01.769801 1437114 cri.go:89] found id: ""
	I1209 05:54:01.769825 1437114 logs.go:282] 0 containers: []
	W1209 05:54:01.769834 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:54:01.769840 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:54:01.769896 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:54:01.798274 1437114 cri.go:89] found id: ""
	I1209 05:54:01.798299 1437114 logs.go:282] 0 containers: []
	W1209 05:54:01.798308 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:54:01.798314 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:54:01.798375 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:54:01.827182 1437114 cri.go:89] found id: ""
	I1209 05:54:01.827207 1437114 logs.go:282] 0 containers: []
	W1209 05:54:01.827215 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:54:01.827222 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:54:01.827284 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:54:01.856540 1437114 cri.go:89] found id: ""
	I1209 05:54:01.856564 1437114 logs.go:282] 0 containers: []
	W1209 05:54:01.856573 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:54:01.856579 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:54:01.856659 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:54:01.885694 1437114 cri.go:89] found id: ""
	I1209 05:54:01.885719 1437114 logs.go:282] 0 containers: []
	W1209 05:54:01.885728 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:54:01.885734 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:54:01.885808 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:54:01.915290 1437114 cri.go:89] found id: ""
	I1209 05:54:01.915318 1437114 logs.go:282] 0 containers: []
	W1209 05:54:01.915327 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:54:01.915333 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:54:01.915392 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:54:01.950840 1437114 cri.go:89] found id: ""
	I1209 05:54:01.950869 1437114 logs.go:282] 0 containers: []
	W1209 05:54:01.950878 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:54:01.950888 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:54:01.950899 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:54:02.014414 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:54:02.014453 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:54:02.032051 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:54:02.032135 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:54:02.095629 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:54:02.087393    6683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:02.088084    6683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:02.089580    6683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:02.090087    6683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:02.091647    6683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:54:02.087393    6683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:02.088084    6683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:02.089580    6683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:02.090087    6683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:02.091647    6683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:54:02.095650 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:54:02.095663 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:54:02.122511 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:54:02.122550 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:54:04.650297 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:54:04.660872 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:54:04.660943 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:54:04.687789 1437114 cri.go:89] found id: ""
	I1209 05:54:04.687819 1437114 logs.go:282] 0 containers: []
	W1209 05:54:04.687827 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:54:04.687833 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:54:04.687902 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:54:04.711324 1437114 cri.go:89] found id: ""
	I1209 05:54:04.711349 1437114 logs.go:282] 0 containers: []
	W1209 05:54:04.711357 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:54:04.711364 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:54:04.711423 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:54:04.737863 1437114 cri.go:89] found id: ""
	I1209 05:54:04.737888 1437114 logs.go:282] 0 containers: []
	W1209 05:54:04.737896 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:54:04.737902 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:54:04.737978 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:54:04.762117 1437114 cri.go:89] found id: ""
	I1209 05:54:04.762143 1437114 logs.go:282] 0 containers: []
	W1209 05:54:04.762153 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:54:04.762160 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:54:04.762242 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:54:04.786158 1437114 cri.go:89] found id: ""
	I1209 05:54:04.786181 1437114 logs.go:282] 0 containers: []
	W1209 05:54:04.786189 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:54:04.786195 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:54:04.786252 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:54:04.810657 1437114 cri.go:89] found id: ""
	I1209 05:54:04.810727 1437114 logs.go:282] 0 containers: []
	W1209 05:54:04.810758 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:54:04.810777 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:54:04.810865 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:54:04.835039 1437114 cri.go:89] found id: ""
	I1209 05:54:04.835061 1437114 logs.go:282] 0 containers: []
	W1209 05:54:04.835069 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:54:04.835075 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:54:04.835132 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:54:04.863664 1437114 cri.go:89] found id: ""
	I1209 05:54:04.863691 1437114 logs.go:282] 0 containers: []
	W1209 05:54:04.863704 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:54:04.863713 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:54:04.863724 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:54:04.889846 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:54:04.889882 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:54:04.919060 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:54:04.919086 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:54:04.995975 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:54:04.996070 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:54:05.020220 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:54:05.020254 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:54:05.088696 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:54:05.080290    6811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:05.080797    6811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:05.082535    6811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:05.082897    6811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:05.084443    6811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:54:05.080290    6811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:05.080797    6811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:05.082535    6811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:05.082897    6811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:05.084443    6811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
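Every "describe nodes" attempt fails identically: kubectl dials localhost:8443 and is refused, which is consistent with the empty kube-apiserver container listing above, so nothing is serving the port. A quick cross-check from inside the node (a sketch assuming ss and curl are present in the node image; these commands do not appear in the log itself):

	# Confirm nothing is listening on the apiserver port,
	# then probe the endpoint directly (-k: the serving cert is self-signed).
	sudo ss -ltn 'sport = :8443'
	curl -ksS https://localhost:8443/healthz || echo "apiserver unreachable"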
	I1209 05:54:07.590606 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:54:07.601036 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:54:07.601107 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:54:07.626527 1437114 cri.go:89] found id: ""
	I1209 05:54:07.626550 1437114 logs.go:282] 0 containers: []
	W1209 05:54:07.626559 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:54:07.626566 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:54:07.626624 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:54:07.656166 1437114 cri.go:89] found id: ""
	I1209 05:54:07.656193 1437114 logs.go:282] 0 containers: []
	W1209 05:54:07.656201 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:54:07.656207 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:54:07.656272 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:54:07.682014 1437114 cri.go:89] found id: ""
	I1209 05:54:07.682038 1437114 logs.go:282] 0 containers: []
	W1209 05:54:07.682046 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:54:07.682052 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:54:07.682116 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:54:07.707210 1437114 cri.go:89] found id: ""
	I1209 05:54:07.707234 1437114 logs.go:282] 0 containers: []
	W1209 05:54:07.707242 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:54:07.707248 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:54:07.707332 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:54:07.731843 1437114 cri.go:89] found id: ""
	I1209 05:54:07.731868 1437114 logs.go:282] 0 containers: []
	W1209 05:54:07.731877 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:54:07.731892 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:54:07.731958 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:54:07.760321 1437114 cri.go:89] found id: ""
	I1209 05:54:07.760346 1437114 logs.go:282] 0 containers: []
	W1209 05:54:07.760354 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:54:07.760363 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:54:07.760424 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:54:07.786309 1437114 cri.go:89] found id: ""
	I1209 05:54:07.786330 1437114 logs.go:282] 0 containers: []
	W1209 05:54:07.786338 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:54:07.786350 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:54:07.786406 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:54:07.809182 1437114 cri.go:89] found id: ""
	I1209 05:54:07.809216 1437114 logs.go:282] 0 containers: []
	W1209 05:54:07.809225 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:54:07.809233 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:54:07.809244 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:54:07.839994 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:54:07.840050 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:54:07.898120 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:54:07.898152 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:54:07.914130 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:54:07.914234 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:54:08.009314 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:54:07.997479    6916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:07.998081    6916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:07.999634    6916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:08.000228    6916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:08.002087    6916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:54:07.997479    6916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:07.998081    6916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:07.999634    6916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:08.000228    6916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:08.002087    6916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:54:08.009391 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:54:08.009413 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:54:10.536185 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:54:10.547685 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:54:10.547757 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:54:10.571843 1437114 cri.go:89] found id: ""
	I1209 05:54:10.571865 1437114 logs.go:282] 0 containers: []
	W1209 05:54:10.571873 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:54:10.571879 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:54:10.571935 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:54:10.598065 1437114 cri.go:89] found id: ""
	I1209 05:54:10.598092 1437114 logs.go:282] 0 containers: []
	W1209 05:54:10.598101 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:54:10.598107 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:54:10.598165 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:54:10.623072 1437114 cri.go:89] found id: ""
	I1209 05:54:10.623098 1437114 logs.go:282] 0 containers: []
	W1209 05:54:10.623107 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:54:10.623113 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:54:10.623200 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:54:10.649781 1437114 cri.go:89] found id: ""
	I1209 05:54:10.649806 1437114 logs.go:282] 0 containers: []
	W1209 05:54:10.649823 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:54:10.649830 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:54:10.649886 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:54:10.677496 1437114 cri.go:89] found id: ""
	I1209 05:54:10.677529 1437114 logs.go:282] 0 containers: []
	W1209 05:54:10.677538 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:54:10.677544 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:54:10.677603 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:54:10.705951 1437114 cri.go:89] found id: ""
	I1209 05:54:10.705982 1437114 logs.go:282] 0 containers: []
	W1209 05:54:10.705991 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:54:10.705997 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:54:10.706062 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:54:10.730882 1437114 cri.go:89] found id: ""
	I1209 05:54:10.730957 1437114 logs.go:282] 0 containers: []
	W1209 05:54:10.730980 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:54:10.730998 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:54:10.731088 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:54:10.757722 1437114 cri.go:89] found id: ""
	I1209 05:54:10.757753 1437114 logs.go:282] 0 containers: []
	W1209 05:54:10.757761 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:54:10.757771 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:54:10.757784 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:54:10.817777 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:54:10.817812 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:54:10.834055 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:54:10.834083 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:54:10.898677 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:54:10.890728    7018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:10.891591    7018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:10.893093    7018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:10.893520    7018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:10.894977    7018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:54:10.890728    7018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:10.891591    7018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:10.893093    7018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:10.893520    7018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:10.894977    7018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:54:10.898700 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:54:10.898713 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:54:10.923656 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:54:10.923690 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:54:13.467228 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:54:13.477812 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:54:13.477886 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:54:13.503323 1437114 cri.go:89] found id: ""
	I1209 05:54:13.503351 1437114 logs.go:282] 0 containers: []
	W1209 05:54:13.503360 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:54:13.503367 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:54:13.503441 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:54:13.538282 1437114 cri.go:89] found id: ""
	I1209 05:54:13.538310 1437114 logs.go:282] 0 containers: []
	W1209 05:54:13.538318 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:54:13.538324 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:54:13.538382 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:54:13.565556 1437114 cri.go:89] found id: ""
	I1209 05:54:13.565584 1437114 logs.go:282] 0 containers: []
	W1209 05:54:13.565594 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:54:13.565600 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:54:13.565659 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:54:13.594477 1437114 cri.go:89] found id: ""
	I1209 05:54:13.594499 1437114 logs.go:282] 0 containers: []
	W1209 05:54:13.594508 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:54:13.594514 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:54:13.594575 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:54:13.618630 1437114 cri.go:89] found id: ""
	I1209 05:54:13.618651 1437114 logs.go:282] 0 containers: []
	W1209 05:54:13.618658 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:54:13.618664 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:54:13.618720 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:54:13.643760 1437114 cri.go:89] found id: ""
	I1209 05:54:13.643786 1437114 logs.go:282] 0 containers: []
	W1209 05:54:13.643795 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:54:13.643801 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:54:13.643858 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:54:13.669716 1437114 cri.go:89] found id: ""
	I1209 05:54:13.669741 1437114 logs.go:282] 0 containers: []
	W1209 05:54:13.669749 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:54:13.669756 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:54:13.669848 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:54:13.693820 1437114 cri.go:89] found id: ""
	I1209 05:54:13.693847 1437114 logs.go:282] 0 containers: []
	W1209 05:54:13.693855 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:54:13.693864 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:54:13.693875 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:54:13.750893 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:54:13.750940 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:54:13.767174 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:54:13.767247 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:54:13.834450 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:54:13.823547    7135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:13.824086    7135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:13.828520    7135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:13.828897    7135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:13.830390    7135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:54:13.834476 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:54:13.834491 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:54:13.860109 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:54:13.860148 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:54:16.386616 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:54:16.396767 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:54:16.396835 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:54:16.421557 1437114 cri.go:89] found id: ""
	I1209 05:54:16.421580 1437114 logs.go:282] 0 containers: []
	W1209 05:54:16.421589 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:54:16.421595 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:54:16.421655 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:54:16.462411 1437114 cri.go:89] found id: ""
	I1209 05:54:16.462432 1437114 logs.go:282] 0 containers: []
	W1209 05:54:16.462441 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:54:16.462447 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:54:16.462505 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:54:16.493789 1437114 cri.go:89] found id: ""
	I1209 05:54:16.493811 1437114 logs.go:282] 0 containers: []
	W1209 05:54:16.493819 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:54:16.493825 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:54:16.493887 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:54:16.523482 1437114 cri.go:89] found id: ""
	I1209 05:54:16.523504 1437114 logs.go:282] 0 containers: []
	W1209 05:54:16.523513 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:54:16.523519 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:54:16.523578 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:54:16.548318 1437114 cri.go:89] found id: ""
	I1209 05:54:16.548354 1437114 logs.go:282] 0 containers: []
	W1209 05:54:16.548363 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:54:16.548386 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:54:16.548471 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:54:16.573131 1437114 cri.go:89] found id: ""
	I1209 05:54:16.573158 1437114 logs.go:282] 0 containers: []
	W1209 05:54:16.573167 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:54:16.573173 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:54:16.573233 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:54:16.596652 1437114 cri.go:89] found id: ""
	I1209 05:54:16.596680 1437114 logs.go:282] 0 containers: []
	W1209 05:54:16.596689 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:54:16.596695 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:54:16.596754 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:54:16.622109 1437114 cri.go:89] found id: ""
	I1209 05:54:16.622131 1437114 logs.go:282] 0 containers: []
	W1209 05:54:16.622139 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:54:16.622148 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:54:16.622160 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:54:16.637977 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:54:16.638014 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:54:16.701887 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:54:16.693598    7245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:16.694125    7245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:16.695778    7245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:16.696319    7245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:16.697759    7245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:54:16.701914 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:54:16.701927 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:54:16.728328 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:54:16.728362 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:54:16.756551 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:54:16.756581 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
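Each gathering pass collects the last 400 journal lines for the kubelet and containerd systemd units. The same logs can be inspected interactively inside the node, which is often quicker when the kubelet is crash-looping; a sketch:

    # last 400 kubelet log lines, exactly as the report gathers them
    sudo journalctl -u kubelet -n 400
    # follow new kubelet entries live while it retries
    sudo journalctl -u kubelet -f
    # containerd unit logs, same pattern
    sudo journalctl -u containerd -n 400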
	I1209 05:54:19.313862 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:54:19.323798 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:54:19.323881 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:54:19.348899 1437114 cri.go:89] found id: ""
	I1209 05:54:19.348924 1437114 logs.go:282] 0 containers: []
	W1209 05:54:19.348932 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:54:19.348939 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:54:19.348996 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:54:19.373133 1437114 cri.go:89] found id: ""
	I1209 05:54:19.373156 1437114 logs.go:282] 0 containers: []
	W1209 05:54:19.373164 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:54:19.373170 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:54:19.373226 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:54:19.397615 1437114 cri.go:89] found id: ""
	I1209 05:54:19.397642 1437114 logs.go:282] 0 containers: []
	W1209 05:54:19.397651 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:54:19.397657 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:54:19.397716 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:54:19.426484 1437114 cri.go:89] found id: ""
	I1209 05:54:19.426505 1437114 logs.go:282] 0 containers: []
	W1209 05:54:19.426513 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:54:19.426519 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:54:19.426575 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:54:19.454826 1437114 cri.go:89] found id: ""
	I1209 05:54:19.454852 1437114 logs.go:282] 0 containers: []
	W1209 05:54:19.454868 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:54:19.454874 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:54:19.454941 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:54:19.483800 1437114 cri.go:89] found id: ""
	I1209 05:54:19.483821 1437114 logs.go:282] 0 containers: []
	W1209 05:54:19.483829 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:54:19.483835 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:54:19.483890 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:54:19.510301 1437114 cri.go:89] found id: ""
	I1209 05:54:19.510322 1437114 logs.go:282] 0 containers: []
	W1209 05:54:19.510330 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:54:19.510336 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:54:19.510392 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:54:19.533740 1437114 cri.go:89] found id: ""
	I1209 05:54:19.533766 1437114 logs.go:282] 0 containers: []
	W1209 05:54:19.533775 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:54:19.533785 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:54:19.533797 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:54:19.590533 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:54:19.590609 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:54:19.607749 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:54:19.607831 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:54:19.670098 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:54:19.662273    7359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:19.663063    7359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:19.664591    7359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:19.664886    7359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:19.666309    7359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:54:19.670121 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:54:19.670135 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:54:19.696365 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:54:19.696401 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
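The recurring "dial tcp [::1]:8443: connect: connection refused" means nothing is listening on the apiserver port at all, so kubectl fails during API group discovery before any request is authenticated; the five E-lines per attempt are client-go retrying that discovery. One way to confirm the port state directly, independent of kubectl (a sketch, run inside the node; /livez is the standard apiserver health endpoint):

    # show any listener on the apiserver port; empty output means none
    sudo ss -ltnp | grep 8443
    # probe the health endpoint; expect a refused connection while it is down
    curl -sk https://localhost:8443/livez || echo "apiserver not reachable"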
	I1209 05:54:22.225234 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:54:22.235522 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:54:22.235590 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:54:22.260044 1437114 cri.go:89] found id: ""
	I1209 05:54:22.260067 1437114 logs.go:282] 0 containers: []
	W1209 05:54:22.260076 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:54:22.260082 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:54:22.260141 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:54:22.283666 1437114 cri.go:89] found id: ""
	I1209 05:54:22.283694 1437114 logs.go:282] 0 containers: []
	W1209 05:54:22.283702 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:54:22.283708 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:54:22.283764 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:54:22.307779 1437114 cri.go:89] found id: ""
	I1209 05:54:22.307812 1437114 logs.go:282] 0 containers: []
	W1209 05:54:22.307821 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:54:22.307827 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:54:22.307884 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:54:22.333595 1437114 cri.go:89] found id: ""
	I1209 05:54:22.333621 1437114 logs.go:282] 0 containers: []
	W1209 05:54:22.333629 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:54:22.333635 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:54:22.333692 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:54:22.357452 1437114 cri.go:89] found id: ""
	I1209 05:54:22.357476 1437114 logs.go:282] 0 containers: []
	W1209 05:54:22.357484 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:54:22.357490 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:54:22.357551 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:54:22.382107 1437114 cri.go:89] found id: ""
	I1209 05:54:22.382170 1437114 logs.go:282] 0 containers: []
	W1209 05:54:22.382184 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:54:22.382192 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:54:22.382251 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:54:22.406738 1437114 cri.go:89] found id: ""
	I1209 05:54:22.406770 1437114 logs.go:282] 0 containers: []
	W1209 05:54:22.406780 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:54:22.406787 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:54:22.406858 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:54:22.432967 1437114 cri.go:89] found id: ""
	I1209 05:54:22.433002 1437114 logs.go:282] 0 containers: []
	W1209 05:54:22.433011 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:54:22.433020 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:54:22.433030 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:54:22.496308 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:54:22.496347 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:54:22.513215 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:54:22.513243 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:54:22.576557 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:54:22.568457    7472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:22.569106    7472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:22.570813    7472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:22.571288    7472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:22.572769    7472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:54:22.576620 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:54:22.576641 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:54:22.601775 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:54:22.601808 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
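Between log-gathering passes, the retry loop polls for a live apiserver process with pgrep: -f matches against the full command line, -x requires the pattern to match that line exactly (as a whole), and -n keeps only the newest match. An empty result with exit status 1, as throughout this run, is what keeps the loop cycling. The same check by hand:

    # newest process whose full command line matches the pattern exactly
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    # no output and exit status 1 means no apiserver process exists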
	I1209 05:54:25.129209 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:54:25.140801 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:54:25.140875 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:54:25.167673 1437114 cri.go:89] found id: ""
	I1209 05:54:25.167699 1437114 logs.go:282] 0 containers: []
	W1209 05:54:25.167708 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:54:25.167714 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:54:25.167774 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:54:25.213289 1437114 cri.go:89] found id: ""
	I1209 05:54:25.213317 1437114 logs.go:282] 0 containers: []
	W1209 05:54:25.213326 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:54:25.213332 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:54:25.213394 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:54:25.251150 1437114 cri.go:89] found id: ""
	I1209 05:54:25.251173 1437114 logs.go:282] 0 containers: []
	W1209 05:54:25.251181 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:54:25.251187 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:54:25.251251 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:54:25.278324 1437114 cri.go:89] found id: ""
	I1209 05:54:25.278347 1437114 logs.go:282] 0 containers: []
	W1209 05:54:25.278355 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:54:25.278361 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:54:25.278426 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:54:25.305947 1437114 cri.go:89] found id: ""
	I1209 05:54:25.305968 1437114 logs.go:282] 0 containers: []
	W1209 05:54:25.305976 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:54:25.305982 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:54:25.306043 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:54:25.330741 1437114 cri.go:89] found id: ""
	I1209 05:54:25.330766 1437114 logs.go:282] 0 containers: []
	W1209 05:54:25.330774 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:54:25.330780 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:54:25.330842 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:54:25.357251 1437114 cri.go:89] found id: ""
	I1209 05:54:25.357289 1437114 logs.go:282] 0 containers: []
	W1209 05:54:25.357297 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:54:25.357303 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:54:25.357361 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:54:25.381550 1437114 cri.go:89] found id: ""
	I1209 05:54:25.381574 1437114 logs.go:282] 0 containers: []
	W1209 05:54:25.381582 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:54:25.381643 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:54:25.381661 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:54:25.407792 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:54:25.407826 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:54:25.444380 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:54:25.444411 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:54:25.508703 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:54:25.508739 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:54:25.525308 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:54:25.525335 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:54:25.590403 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:54:25.582560    7600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:25.583141    7600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:25.584775    7600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:25.585120    7600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:25.586571    7600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
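Each cycle then checks the expected control-plane components one by one; crictl ps -a --quiet prints container IDs only, so an empty string (logged as found id: "") means the component was never created as a container, not merely that it exited. The per-component loop condenses to a few lines of shell:

    # probe every expected component; empty output means it is missing
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager; do
      echo "== $c =="
      sudo crictl ps -a --quiet --name=$c
    done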
	I1209 05:54:28.090673 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:54:28.101806 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:54:28.101927 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:54:28.126175 1437114 cri.go:89] found id: ""
	I1209 05:54:28.126210 1437114 logs.go:282] 0 containers: []
	W1209 05:54:28.126219 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:54:28.126225 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:54:28.126302 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:54:28.154842 1437114 cri.go:89] found id: ""
	I1209 05:54:28.154863 1437114 logs.go:282] 0 containers: []
	W1209 05:54:28.154872 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:54:28.154878 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:54:28.154936 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:54:28.181513 1437114 cri.go:89] found id: ""
	I1209 05:54:28.181536 1437114 logs.go:282] 0 containers: []
	W1209 05:54:28.181543 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:54:28.181550 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:54:28.181606 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:54:28.208958 1437114 cri.go:89] found id: ""
	I1209 05:54:28.208979 1437114 logs.go:282] 0 containers: []
	W1209 05:54:28.208987 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:54:28.208993 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:54:28.209051 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:54:28.236261 1437114 cri.go:89] found id: ""
	I1209 05:54:28.236288 1437114 logs.go:282] 0 containers: []
	W1209 05:54:28.236296 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:54:28.236308 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:54:28.236365 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:54:28.264550 1437114 cri.go:89] found id: ""
	I1209 05:54:28.264573 1437114 logs.go:282] 0 containers: []
	W1209 05:54:28.264582 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:54:28.264588 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:54:28.264645 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:54:28.288754 1437114 cri.go:89] found id: ""
	I1209 05:54:28.288779 1437114 logs.go:282] 0 containers: []
	W1209 05:54:28.288787 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:54:28.288805 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:54:28.288865 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:54:28.311894 1437114 cri.go:89] found id: ""
	I1209 05:54:28.311922 1437114 logs.go:282] 0 containers: []
	W1209 05:54:28.311931 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:54:28.311941 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:54:28.311952 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:54:28.368882 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:54:28.368916 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:54:28.385073 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:54:28.385102 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:54:28.453852 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:54:28.445585    7699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:28.446317    7699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:28.447999    7699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:28.448560    7699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:28.449990    7699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:54:28.453912 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:54:28.453948 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:54:28.481464 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:54:28.481542 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:54:31.017971 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:54:31.028776 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:54:31.028848 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:54:31.059955 1437114 cri.go:89] found id: ""
	I1209 05:54:31.059979 1437114 logs.go:282] 0 containers: []
	W1209 05:54:31.059988 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:54:31.059995 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:54:31.060087 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:54:31.085360 1437114 cri.go:89] found id: ""
	I1209 05:54:31.085389 1437114 logs.go:282] 0 containers: []
	W1209 05:54:31.085398 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:54:31.085404 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:54:31.085466 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:54:31.112050 1437114 cri.go:89] found id: ""
	I1209 05:54:31.112083 1437114 logs.go:282] 0 containers: []
	W1209 05:54:31.112092 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:54:31.112100 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:54:31.112170 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:54:31.139102 1437114 cri.go:89] found id: ""
	I1209 05:54:31.139138 1437114 logs.go:282] 0 containers: []
	W1209 05:54:31.139147 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:54:31.139153 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:54:31.139223 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:54:31.166677 1437114 cri.go:89] found id: ""
	I1209 05:54:31.166710 1437114 logs.go:282] 0 containers: []
	W1209 05:54:31.166720 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:54:31.166727 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:54:31.166818 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:54:31.204582 1437114 cri.go:89] found id: ""
	I1209 05:54:31.204610 1437114 logs.go:282] 0 containers: []
	W1209 05:54:31.204619 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:54:31.204626 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:54:31.204693 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:54:31.242874 1437114 cri.go:89] found id: ""
	I1209 05:54:31.242900 1437114 logs.go:282] 0 containers: []
	W1209 05:54:31.242909 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:54:31.242916 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:54:31.242991 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:54:31.268196 1437114 cri.go:89] found id: ""
	I1209 05:54:31.268225 1437114 logs.go:282] 0 containers: []
	W1209 05:54:31.268234 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:54:31.268243 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:54:31.268254 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:54:31.293521 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:54:31.293559 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:54:31.321144 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:54:31.321175 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:54:31.378617 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:54:31.378656 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:54:31.394506 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:54:31.394533 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:54:31.467240 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:54:31.458393    7825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:31.459167    7825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:31.460831    7825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:31.461408    7825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:31.463045    7825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:54:33.967506 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:54:33.977826 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:54:33.977902 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:54:34.002325 1437114 cri.go:89] found id: ""
	I1209 05:54:34.002351 1437114 logs.go:282] 0 containers: []
	W1209 05:54:34.002360 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:54:34.002367 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:54:34.002443 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:54:34.029888 1437114 cri.go:89] found id: ""
	I1209 05:54:34.029919 1437114 logs.go:282] 0 containers: []
	W1209 05:54:34.029928 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:54:34.029935 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:54:34.029996 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:54:34.058673 1437114 cri.go:89] found id: ""
	I1209 05:54:34.058698 1437114 logs.go:282] 0 containers: []
	W1209 05:54:34.058706 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:54:34.058712 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:54:34.058783 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:54:34.083346 1437114 cri.go:89] found id: ""
	I1209 05:54:34.083370 1437114 logs.go:282] 0 containers: []
	W1209 05:54:34.083379 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:54:34.083385 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:54:34.083453 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:54:34.108098 1437114 cri.go:89] found id: ""
	I1209 05:54:34.108126 1437114 logs.go:282] 0 containers: []
	W1209 05:54:34.108135 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:54:34.108141 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:54:34.108227 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:54:34.133779 1437114 cri.go:89] found id: ""
	I1209 05:54:34.133803 1437114 logs.go:282] 0 containers: []
	W1209 05:54:34.133812 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:54:34.133819 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:54:34.133877 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:54:34.161528 1437114 cri.go:89] found id: ""
	I1209 05:54:34.161607 1437114 logs.go:282] 0 containers: []
	W1209 05:54:34.161639 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:54:34.161662 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:54:34.161779 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:54:34.191325 1437114 cri.go:89] found id: ""
	I1209 05:54:34.191400 1437114 logs.go:282] 0 containers: []
	W1209 05:54:34.191423 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:54:34.191443 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:54:34.191493 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:54:34.258939 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:54:34.258977 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:54:34.275607 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:54:34.275640 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:54:34.346638 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:54:34.338621    7924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:34.339268    7924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:34.340363    7924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:34.340982    7924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:34.342615    7924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:54:34.346709 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:54:34.346754 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:54:34.373053 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:54:34.373092 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
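The cri.go/logs.go lines above are minikube's container probe: for each control-plane component it runs `sudo crictl ps -a --quiet --name=<component>` on the node over SSH and treats empty output as "0 containers", hence the repeated `found id: ""` / `No container was found matching` pairs. A minimal local sketch of the same check follows; it is illustrative only (minikube drives the command through its ssh_runner, not os/exec, and sudo/crictl are assumed to be available):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // listContainerIDs mirrors the probe in the log: `crictl ps -a --quiet --name=<name>`
    // prints one container ID per line, or nothing when no container matches.
    func listContainerIDs(name string) ([]string, error) {
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
    	if err != nil {
    		return nil, err
    	}
    	// Empty output splits into an empty slice, i.e. "0 containers" in the log.
    	return strings.Fields(string(out)), nil
    }

    func main() {
    	// The same eight component names the log cycles through.
    	for _, name := range []string{
    		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
    	} {
    		ids, err := listContainerIDs(name)
    		if err != nil {
    			fmt.Printf("probe %q failed: %v\n", name, err)
    			continue
    		}
    		fmt.Printf("%d containers matching %q: %v\n", len(ids), name, ids)
    	}
    }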
	I1209 05:54:36.904183 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:54:36.914625 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:54:36.914703 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:54:36.939165 1437114 cri.go:89] found id: ""
	I1209 05:54:36.939204 1437114 logs.go:282] 0 containers: []
	W1209 05:54:36.939213 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:54:36.939220 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:54:36.939280 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:54:36.968277 1437114 cri.go:89] found id: ""
	I1209 05:54:36.968303 1437114 logs.go:282] 0 containers: []
	W1209 05:54:36.968312 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:54:36.968319 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:54:36.968379 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:54:36.993837 1437114 cri.go:89] found id: ""
	I1209 05:54:36.993866 1437114 logs.go:282] 0 containers: []
	W1209 05:54:36.993875 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:54:36.993882 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:54:36.993939 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:54:37.029321 1437114 cri.go:89] found id: ""
	I1209 05:54:37.029358 1437114 logs.go:282] 0 containers: []
	W1209 05:54:37.029370 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:54:37.029381 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:54:37.029479 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:54:37.060208 1437114 cri.go:89] found id: ""
	I1209 05:54:37.060235 1437114 logs.go:282] 0 containers: []
	W1209 05:54:37.060244 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:54:37.060251 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:54:37.060311 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:54:37.085969 1437114 cri.go:89] found id: ""
	I1209 05:54:37.085992 1437114 logs.go:282] 0 containers: []
	W1209 05:54:37.086001 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:54:37.086007 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:54:37.086066 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:54:37.114324 1437114 cri.go:89] found id: ""
	I1209 05:54:37.114357 1437114 logs.go:282] 0 containers: []
	W1209 05:54:37.114367 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:54:37.114373 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:54:37.114478 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:54:37.143312 1437114 cri.go:89] found id: ""
	I1209 05:54:37.143339 1437114 logs.go:282] 0 containers: []
	W1209 05:54:37.143348 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:54:37.143357 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:54:37.143369 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:54:37.234893 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:54:37.226773    8026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:37.227615    8026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:37.228809    8026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:37.229450    8026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:37.231054    8026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:54:37.234921 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:54:37.234933 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:54:37.262601 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:54:37.262635 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:54:37.289433 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:54:37.289458 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:54:37.345400 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:54:37.345435 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
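Every "failed describe nodes" block here has the same root cause: kubectl is pointed at https://localhost:8443 and nothing is listening, so each API group fetch fails with "connection refused" before any request reaches the apiserver. That is consistent with the crictl probes above finding no kube-apiserver container at all. A quick way to distinguish "nothing bound to the port" from "apiserver up but unhealthy" is a raw TCP dial, sketched below against the address from the log (an illustrative check, not minikube code):

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// The endpoint kubectl is failing against in the log above.
    	addr := "localhost:8443"
    	conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
    	if err != nil {
    		// "connection refused" means no process is bound to the port at all,
    		// matching the log: crictl never found a kube-apiserver container.
    		fmt.Printf("dial %s: %v\n", addr, err)
    		return
    	}
    	conn.Close()
    	fmt.Printf("%s is accepting connections; the failure would be higher up the stack\n", addr)
    }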
	I1209 05:54:39.861840 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:54:39.873772 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:54:39.873850 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:54:39.901691 1437114 cri.go:89] found id: ""
	I1209 05:54:39.901714 1437114 logs.go:282] 0 containers: []
	W1209 05:54:39.901725 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:54:39.901731 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:54:39.901793 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:54:39.925900 1437114 cri.go:89] found id: ""
	I1209 05:54:39.925935 1437114 logs.go:282] 0 containers: []
	W1209 05:54:39.925944 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:54:39.925950 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:54:39.926009 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:54:39.951997 1437114 cri.go:89] found id: ""
	I1209 05:54:39.952041 1437114 logs.go:282] 0 containers: []
	W1209 05:54:39.952050 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:54:39.952056 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:54:39.952116 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:54:39.980156 1437114 cri.go:89] found id: ""
	I1209 05:54:39.980182 1437114 logs.go:282] 0 containers: []
	W1209 05:54:39.980190 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:54:39.980196 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:54:39.980255 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:54:40.007109 1437114 cri.go:89] found id: ""
	I1209 05:54:40.007136 1437114 logs.go:282] 0 containers: []
	W1209 05:54:40.007146 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:54:40.007154 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:54:40.007234 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:54:40.056170 1437114 cri.go:89] found id: ""
	I1209 05:54:40.056197 1437114 logs.go:282] 0 containers: []
	W1209 05:54:40.056207 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:54:40.056214 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:54:40.056298 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:54:40.085850 1437114 cri.go:89] found id: ""
	I1209 05:54:40.085879 1437114 logs.go:282] 0 containers: []
	W1209 05:54:40.085888 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:54:40.085894 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:54:40.085960 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:54:40.118208 1437114 cri.go:89] found id: ""
	I1209 05:54:40.118245 1437114 logs.go:282] 0 containers: []
	W1209 05:54:40.118256 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:54:40.118267 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:54:40.118281 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:54:40.195166 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:54:40.184383    8140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:40.185244    8140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:40.187445    8140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:40.188458    8140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:40.189404    8140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:54:40.195189 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:54:40.195203 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:54:40.223567 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:54:40.223651 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:54:40.266759 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:54:40.266786 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:54:40.323783 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:54:40.323818 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:54:42.842021 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:54:42.852681 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:54:42.852755 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:54:42.876598 1437114 cri.go:89] found id: ""
	I1209 05:54:42.876622 1437114 logs.go:282] 0 containers: []
	W1209 05:54:42.876631 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:54:42.876637 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:54:42.876694 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:54:42.901491 1437114 cri.go:89] found id: ""
	I1209 05:54:42.901515 1437114 logs.go:282] 0 containers: []
	W1209 05:54:42.901523 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:54:42.901529 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:54:42.901588 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:54:42.930050 1437114 cri.go:89] found id: ""
	I1209 05:54:42.930077 1437114 logs.go:282] 0 containers: []
	W1209 05:54:42.930086 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:54:42.930093 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:54:42.930151 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:54:42.953794 1437114 cri.go:89] found id: ""
	I1209 05:54:42.953817 1437114 logs.go:282] 0 containers: []
	W1209 05:54:42.953825 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:54:42.953837 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:54:42.953940 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:54:42.977300 1437114 cri.go:89] found id: ""
	I1209 05:54:42.977324 1437114 logs.go:282] 0 containers: []
	W1209 05:54:42.977333 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:54:42.977339 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:54:42.977416 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:54:43.001015 1437114 cri.go:89] found id: ""
	I1209 05:54:43.001080 1437114 logs.go:282] 0 containers: []
	W1209 05:54:43.001095 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:54:43.001103 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:54:43.001169 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:54:43.026886 1437114 cri.go:89] found id: ""
	I1209 05:54:43.026910 1437114 logs.go:282] 0 containers: []
	W1209 05:54:43.026918 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:54:43.026925 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:54:43.026984 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:54:43.057227 1437114 cri.go:89] found id: ""
	I1209 05:54:43.057253 1437114 logs.go:282] 0 containers: []
	W1209 05:54:43.057271 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:54:43.057281 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:54:43.057293 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:54:43.115319 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:54:43.115357 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:54:43.131310 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:54:43.131346 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:54:43.204953 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:54:43.196603    8257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:43.197525    8257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:43.199091    8257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:43.199623    8257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:43.201121    8257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:54:43.204975 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:54:43.204987 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:54:43.231713 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:54:43.231747 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
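The timestamps show the outer loop re-running `sudo pgrep -xnf kube-apiserver.*minikube.*` roughly every three seconds (05:54:34 → :36.9 → :39.9 → :42.8 ...), repeating the container listings and log gathering on each pass until the apiserver appears or the wait times out. A minimal sketch of that poll-until-deadline shape follows; the interval and timeout values are assumptions inferred from the log, not minikube's actual settings:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // apiserverRunning mirrors the pgrep probe in the log: exit status 0 means a
    // matching kube-apiserver process exists, non-zero means it does not (yet).
    func apiserverRunning() bool {
    	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
    }

    func main() {
    	interval := 3 * time.Second // assumed cadence, inferred from the timestamps
    	deadline := time.Now().Add(2 * time.Minute)
    	for time.Now().Before(deadline) {
    		if apiserverRunning() {
    			fmt.Println("kube-apiserver process found")
    			return
    		}
    		// In the real loop, the container listings and log gathering seen
    		// above run here before the next retry.
    		time.Sleep(interval)
    	}
    	fmt.Println("timed out waiting for kube-apiserver")
    }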
	I1209 05:54:45.766147 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:54:45.776210 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:54:45.776285 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:54:45.804782 1437114 cri.go:89] found id: ""
	I1209 05:54:45.804810 1437114 logs.go:282] 0 containers: []
	W1209 05:54:45.804857 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:54:45.804871 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:54:45.804939 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:54:45.828660 1437114 cri.go:89] found id: ""
	I1209 05:54:45.828684 1437114 logs.go:282] 0 containers: []
	W1209 05:54:45.828692 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:54:45.828698 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:54:45.828758 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:54:45.853575 1437114 cri.go:89] found id: ""
	I1209 05:54:45.853598 1437114 logs.go:282] 0 containers: []
	W1209 05:54:45.853606 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:54:45.853612 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:54:45.853667 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:54:45.877674 1437114 cri.go:89] found id: ""
	I1209 05:54:45.877697 1437114 logs.go:282] 0 containers: []
	W1209 05:54:45.877705 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:54:45.877711 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:54:45.877775 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:54:45.902246 1437114 cri.go:89] found id: ""
	I1209 05:54:45.902270 1437114 logs.go:282] 0 containers: []
	W1209 05:54:45.902284 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:54:45.902291 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:54:45.902347 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:54:45.929443 1437114 cri.go:89] found id: ""
	I1209 05:54:45.929517 1437114 logs.go:282] 0 containers: []
	W1209 05:54:45.929532 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:54:45.929539 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:54:45.929596 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:54:45.955032 1437114 cri.go:89] found id: ""
	I1209 05:54:45.955065 1437114 logs.go:282] 0 containers: []
	W1209 05:54:45.955074 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:54:45.955081 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:54:45.955147 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:54:45.983502 1437114 cri.go:89] found id: ""
	I1209 05:54:45.983527 1437114 logs.go:282] 0 containers: []
	W1209 05:54:45.983535 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:54:45.983544 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:54:45.983555 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:54:46.049253 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:54:46.049292 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:54:46.066199 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:54:46.066229 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:54:46.133498 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:54:46.124747    8372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:46.125334    8372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:46.126986    8372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:46.127505    8372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:46.129096    8372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:54:46.133521 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:54:46.133534 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:54:46.159468 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:54:46.159500 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:54:48.698046 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:54:48.710430 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:54:48.710504 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:54:48.739692 1437114 cri.go:89] found id: ""
	I1209 05:54:48.739718 1437114 logs.go:282] 0 containers: []
	W1209 05:54:48.739726 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:54:48.739733 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:54:48.739790 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:54:48.764166 1437114 cri.go:89] found id: ""
	I1209 05:54:48.764192 1437114 logs.go:282] 0 containers: []
	W1209 05:54:48.764200 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:54:48.764206 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:54:48.764264 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:54:48.788074 1437114 cri.go:89] found id: ""
	I1209 05:54:48.788097 1437114 logs.go:282] 0 containers: []
	W1209 05:54:48.788114 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:54:48.788122 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:54:48.788189 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:54:48.813373 1437114 cri.go:89] found id: ""
	I1209 05:54:48.813398 1437114 logs.go:282] 0 containers: []
	W1209 05:54:48.813407 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:54:48.813414 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:54:48.813472 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:54:48.840222 1437114 cri.go:89] found id: ""
	I1209 05:54:48.840248 1437114 logs.go:282] 0 containers: []
	W1209 05:54:48.840256 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:54:48.840270 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:54:48.840331 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:54:48.869002 1437114 cri.go:89] found id: ""
	I1209 05:54:48.869025 1437114 logs.go:282] 0 containers: []
	W1209 05:54:48.869034 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:54:48.869041 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:54:48.869098 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:54:48.897074 1437114 cri.go:89] found id: ""
	I1209 05:54:48.897100 1437114 logs.go:282] 0 containers: []
	W1209 05:54:48.897108 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:54:48.897115 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:54:48.897193 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:54:48.920534 1437114 cri.go:89] found id: ""
	I1209 05:54:48.920559 1437114 logs.go:282] 0 containers: []
	W1209 05:54:48.920567 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:54:48.920576 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:54:48.920588 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:54:48.976882 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:54:48.976918 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:54:48.992754 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:54:48.992782 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:54:49.058058 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:54:49.049574    8487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:49.050149    8487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:49.051765    8487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:49.052269    8487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:49.053870    8487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:54:49.058079 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:54:49.058092 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:54:49.083543 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:54:49.083578 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
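The "container status" gather uses a small shell fallback chain: sudo `which crictl || echo crictl` ps -a || sudo docker ps -a — prefer crictl wherever PATH resolves it, fall back to the bare command name, and if that invocation fails entirely, try `docker ps -a` instead. The same preference order expressed directly in Go (illustrative only; minikube runs the one-liner through bash on the node):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Prefer crictl: resolve via PATH first, falling back to the bare name,
    	// as `which crictl || echo crictl` does in the log's shell one-liner.
    	crictl := "crictl"
    	if path, err := exec.LookPath("crictl"); err == nil {
    		crictl = path
    	}
    	if out, err := exec.Command("sudo", crictl, "ps", "-a").CombinedOutput(); err == nil {
    		fmt.Print(string(out))
    		return
    	}
    	// Last resort, mirroring `|| sudo docker ps -a`.
    	if out, err := exec.Command("sudo", "docker", "ps", "-a").CombinedOutput(); err == nil {
    		fmt.Print(string(out))
    		return
    	}
    	fmt.Println("neither crictl nor docker produced a container listing")
    }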
	I1209 05:54:51.613470 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:54:51.625228 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:54:51.625329 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:54:51.651832 1437114 cri.go:89] found id: ""
	I1209 05:54:51.651863 1437114 logs.go:282] 0 containers: []
	W1209 05:54:51.651871 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:54:51.651878 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:54:51.651989 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:54:51.689430 1437114 cri.go:89] found id: ""
	I1209 05:54:51.689471 1437114 logs.go:282] 0 containers: []
	W1209 05:54:51.689480 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:54:51.689486 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:54:51.689556 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:54:51.718333 1437114 cri.go:89] found id: ""
	I1209 05:54:51.718377 1437114 logs.go:282] 0 containers: []
	W1209 05:54:51.718387 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:54:51.718394 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:54:51.718468 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:54:51.748566 1437114 cri.go:89] found id: ""
	I1209 05:54:51.748641 1437114 logs.go:282] 0 containers: []
	W1209 05:54:51.748656 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:54:51.748663 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:54:51.748732 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:54:51.773149 1437114 cri.go:89] found id: ""
	I1209 05:54:51.773175 1437114 logs.go:282] 0 containers: []
	W1209 05:54:51.773184 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:54:51.773191 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:54:51.773283 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:54:51.802227 1437114 cri.go:89] found id: ""
	I1209 05:54:51.802253 1437114 logs.go:282] 0 containers: []
	W1209 05:54:51.802262 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:54:51.802272 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:54:51.802351 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:54:51.833926 1437114 cri.go:89] found id: ""
	I1209 05:54:51.833994 1437114 logs.go:282] 0 containers: []
	W1209 05:54:51.834016 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:54:51.834036 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:54:51.834126 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:54:51.859887 1437114 cri.go:89] found id: ""
	I1209 05:54:51.859919 1437114 logs.go:282] 0 containers: []
	W1209 05:54:51.859927 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:54:51.859937 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:54:51.859948 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:54:51.876110 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:54:51.876138 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:54:51.942848 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:54:51.934424    8598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:51.935014    8598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:51.936468    8598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:51.937091    8598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:51.938535    8598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:54:51.942870 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:54:51.942883 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:54:51.968433 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:54:51.968466 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:54:51.996383 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:54:51.996421 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:54:54.554719 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:54:54.565346 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:54:54.565415 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:54:54.593426 1437114 cri.go:89] found id: ""
	I1209 05:54:54.593450 1437114 logs.go:282] 0 containers: []
	W1209 05:54:54.593458 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:54:54.593464 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:54:54.593522 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:54:54.621281 1437114 cri.go:89] found id: ""
	I1209 05:54:54.621304 1437114 logs.go:282] 0 containers: []
	W1209 05:54:54.621312 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:54:54.621318 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:54:54.621376 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:54:54.646126 1437114 cri.go:89] found id: ""
	I1209 05:54:54.646194 1437114 logs.go:282] 0 containers: []
	W1209 05:54:54.646216 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:54:54.646234 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:54:54.646318 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:54:54.674944 1437114 cri.go:89] found id: ""
	I1209 05:54:54.674986 1437114 logs.go:282] 0 containers: []
	W1209 05:54:54.675011 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:54:54.675029 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:54:54.675110 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:54:54.700733 1437114 cri.go:89] found id: ""
	I1209 05:54:54.700766 1437114 logs.go:282] 0 containers: []
	W1209 05:54:54.700775 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:54:54.700781 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:54:54.700860 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:54:54.733555 1437114 cri.go:89] found id: ""
	I1209 05:54:54.733634 1437114 logs.go:282] 0 containers: []
	W1209 05:54:54.733656 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:54:54.733676 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:54:54.733777 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:54:54.759852 1437114 cri.go:89] found id: ""
	I1209 05:54:54.759926 1437114 logs.go:282] 0 containers: []
	W1209 05:54:54.759949 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:54:54.759972 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:54:54.760110 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:54:54.784303 1437114 cri.go:89] found id: ""
	I1209 05:54:54.784377 1437114 logs.go:282] 0 containers: []
	W1209 05:54:54.784392 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:54:54.784402 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:54:54.784413 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:54:54.809753 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:54:54.809790 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:54:54.836589 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:54:54.836617 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:54:54.899737 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:54:54.899784 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:54:54.915785 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:54:54.915814 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:54:54.979896 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:54:54.971488    8724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:54.971906    8724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:54.973479    8724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:54.974140    8724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:54.976063    8724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:54:57.480193 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:54:57.491395 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:54:57.491473 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:54:57.518091 1437114 cri.go:89] found id: ""
	I1209 05:54:57.518114 1437114 logs.go:282] 0 containers: []
	W1209 05:54:57.518123 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:54:57.518130 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:54:57.518191 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:54:57.545921 1437114 cri.go:89] found id: ""
	I1209 05:54:57.545954 1437114 logs.go:282] 0 containers: []
	W1209 05:54:57.545962 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:54:57.545969 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:54:57.546037 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:54:57.570249 1437114 cri.go:89] found id: ""
	I1209 05:54:57.570280 1437114 logs.go:282] 0 containers: []
	W1209 05:54:57.570290 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:54:57.570296 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:54:57.570367 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:54:57.597541 1437114 cri.go:89] found id: ""
	I1209 05:54:57.597565 1437114 logs.go:282] 0 containers: []
	W1209 05:54:57.597576 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:54:57.597583 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:54:57.597639 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:54:57.625351 1437114 cri.go:89] found id: ""
	I1209 05:54:57.625374 1437114 logs.go:282] 0 containers: []
	W1209 05:54:57.625382 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:54:57.625388 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:54:57.625446 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:54:57.653430 1437114 cri.go:89] found id: ""
	I1209 05:54:57.653504 1437114 logs.go:282] 0 containers: []
	W1209 05:54:57.653520 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:54:57.653528 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:54:57.653592 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:54:57.686655 1437114 cri.go:89] found id: ""
	I1209 05:54:57.686681 1437114 logs.go:282] 0 containers: []
	W1209 05:54:57.686704 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:54:57.686711 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:54:57.686783 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:54:57.715897 1437114 cri.go:89] found id: ""
	I1209 05:54:57.715924 1437114 logs.go:282] 0 containers: []
	W1209 05:54:57.715932 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:54:57.715941 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:54:57.715952 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:54:57.781835 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:54:57.781871 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:54:57.798499 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:54:57.798527 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:54:57.870136 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:54:57.856259    8825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:57.861442    8825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:57.864278    8825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:57.864723    8825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:57.866272    8825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:54:57.856259    8825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:57.861442    8825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:57.864278    8825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:57.864723    8825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:57.866272    8825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:54:57.870169 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:54:57.870182 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:54:57.894760 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:54:57.894794 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
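Each pass above issues one crictl query per expected control-plane component. A condensed, hand-runnable equivalent of that sweep (a sketch for a shell inside the node with sudo; the component list is taken straight from the log):

	# For each expected component, list all containers (running or exited) whose name matches.
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
	  ids=$(sudo crictl ps -a --quiet --name="$name")
	  if [ -z "$ids" ]; then echo "no container matching $name"; else echo "$name: $ids"; fi
	done

An empty result for every name, as in this run, means containerd was never asked to create the control-plane containers at all, which points at kubelet or bootstrap failure rather than at containers that started and crashed (those would still show under crictl ps -a).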
	I1209 05:55:00.423491 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:55:00.436333 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:55:00.436416 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:55:00.477329 1437114 cri.go:89] found id: ""
	I1209 05:55:00.477357 1437114 logs.go:282] 0 containers: []
	W1209 05:55:00.477367 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:55:00.477373 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:55:00.477440 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:55:00.510439 1437114 cri.go:89] found id: ""
	I1209 05:55:00.510467 1437114 logs.go:282] 0 containers: []
	W1209 05:55:00.510477 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:55:00.510483 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:55:00.510565 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:55:00.539373 1437114 cri.go:89] found id: ""
	I1209 05:55:00.539404 1437114 logs.go:282] 0 containers: []
	W1209 05:55:00.539413 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:55:00.539420 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:55:00.539484 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:55:00.567440 1437114 cri.go:89] found id: ""
	I1209 05:55:00.567470 1437114 logs.go:282] 0 containers: []
	W1209 05:55:00.567479 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:55:00.567486 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:55:00.567547 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:55:00.603417 1437114 cri.go:89] found id: ""
	I1209 05:55:00.603442 1437114 logs.go:282] 0 containers: []
	W1209 05:55:00.603450 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:55:00.603456 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:55:00.603515 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:55:00.628877 1437114 cri.go:89] found id: ""
	I1209 05:55:00.628900 1437114 logs.go:282] 0 containers: []
	W1209 05:55:00.628909 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:55:00.628915 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:55:00.628972 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:55:00.657533 1437114 cri.go:89] found id: ""
	I1209 05:55:00.657562 1437114 logs.go:282] 0 containers: []
	W1209 05:55:00.657571 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:55:00.657578 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:55:00.657638 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:55:00.686066 1437114 cri.go:89] found id: ""
	I1209 05:55:00.686090 1437114 logs.go:282] 0 containers: []
	W1209 05:55:00.686099 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:55:00.686108 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:55:00.686120 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:55:00.708508 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:55:00.708588 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:55:00.777301 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:55:00.768863    8933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:00.769274    8933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:00.770892    8933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:00.771415    8933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:00.772464    8933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:55:00.768863    8933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:00.769274    8933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:00.770892    8933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:00.771415    8933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:00.772464    8933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:55:00.777372 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:55:00.777394 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:55:00.802304 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:55:00.802337 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:55:00.829410 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:55:00.829436 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:55:03.385877 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:55:03.396171 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:55:03.396238 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:55:03.420742 1437114 cri.go:89] found id: ""
	I1209 05:55:03.420767 1437114 logs.go:282] 0 containers: []
	W1209 05:55:03.420775 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:55:03.420781 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:55:03.420837 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:55:03.458835 1437114 cri.go:89] found id: ""
	I1209 05:55:03.458861 1437114 logs.go:282] 0 containers: []
	W1209 05:55:03.458869 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:55:03.458876 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:55:03.458934 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:55:03.488300 1437114 cri.go:89] found id: ""
	I1209 05:55:03.488326 1437114 logs.go:282] 0 containers: []
	W1209 05:55:03.488334 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:55:03.488340 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:55:03.488400 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:55:03.516405 1437114 cri.go:89] found id: ""
	I1209 05:55:03.516432 1437114 logs.go:282] 0 containers: []
	W1209 05:55:03.516440 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:55:03.516446 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:55:03.516506 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:55:03.545401 1437114 cri.go:89] found id: ""
	I1209 05:55:03.545467 1437114 logs.go:282] 0 containers: []
	W1209 05:55:03.545492 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:55:03.545510 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:55:03.545597 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:55:03.570243 1437114 cri.go:89] found id: ""
	I1209 05:55:03.570316 1437114 logs.go:282] 0 containers: []
	W1209 05:55:03.570342 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:55:03.570357 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:55:03.570449 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:55:03.594930 1437114 cri.go:89] found id: ""
	I1209 05:55:03.594955 1437114 logs.go:282] 0 containers: []
	W1209 05:55:03.594965 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:55:03.594971 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:55:03.595030 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:55:03.619052 1437114 cri.go:89] found id: ""
	I1209 05:55:03.619080 1437114 logs.go:282] 0 containers: []
	W1209 05:55:03.619089 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:55:03.619098 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:55:03.619114 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:55:03.676980 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:55:03.677019 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:55:03.697398 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:55:03.697427 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:55:03.769575 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:55:03.761997    9046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:03.762424    9046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:03.763695    9046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:03.764060    9046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:03.765630    9046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:55:03.761997    9046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:03.762424    9046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:03.763695    9046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:03.764060    9046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:03.765630    9046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:55:03.769607 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:55:03.769620 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:55:03.794589 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:55:03.794623 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:55:06.321615 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:55:06.331929 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:55:06.331999 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:55:06.358377 1437114 cri.go:89] found id: ""
	I1209 05:55:06.358403 1437114 logs.go:282] 0 containers: []
	W1209 05:55:06.358411 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:55:06.358418 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:55:06.358481 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:55:06.384508 1437114 cri.go:89] found id: ""
	I1209 05:55:06.384533 1437114 logs.go:282] 0 containers: []
	W1209 05:55:06.384542 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:55:06.384548 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:55:06.384607 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:55:06.408779 1437114 cri.go:89] found id: ""
	I1209 05:55:06.408801 1437114 logs.go:282] 0 containers: []
	W1209 05:55:06.408810 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:55:06.408816 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:55:06.408874 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:55:06.441591 1437114 cri.go:89] found id: ""
	I1209 05:55:06.441613 1437114 logs.go:282] 0 containers: []
	W1209 05:55:06.441622 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:55:06.441628 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:55:06.441689 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:55:06.474533 1437114 cri.go:89] found id: ""
	I1209 05:55:06.474555 1437114 logs.go:282] 0 containers: []
	W1209 05:55:06.474567 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:55:06.474574 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:55:06.474706 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:55:06.503583 1437114 cri.go:89] found id: ""
	I1209 05:55:06.503655 1437114 logs.go:282] 0 containers: []
	W1209 05:55:06.503677 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:55:06.503697 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:55:06.503785 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:55:06.529409 1437114 cri.go:89] found id: ""
	I1209 05:55:06.529434 1437114 logs.go:282] 0 containers: []
	W1209 05:55:06.529443 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:55:06.529449 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:55:06.529508 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:55:06.559184 1437114 cri.go:89] found id: ""
	I1209 05:55:06.559254 1437114 logs.go:282] 0 containers: []
	W1209 05:55:06.559289 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:55:06.559317 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:55:06.559341 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:55:06.616116 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:55:06.616152 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:55:06.632189 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:55:06.632218 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:55:06.703879 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:55:06.694883    9157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:06.695859    9157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:06.697486    9157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:06.698063    9157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:06.699592    9157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:55:06.694883    9157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:06.695859    9157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:06.697486    9157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:06.698063    9157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:06.699592    9157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:55:06.703908 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:55:06.703924 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:55:06.733107 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:55:06.733166 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:55:09.268085 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:55:09.278413 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:55:09.278488 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:55:09.301738 1437114 cri.go:89] found id: ""
	I1209 05:55:09.301764 1437114 logs.go:282] 0 containers: []
	W1209 05:55:09.301773 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:55:09.301779 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:55:09.301836 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:55:09.329939 1437114 cri.go:89] found id: ""
	I1209 05:55:09.329962 1437114 logs.go:282] 0 containers: []
	W1209 05:55:09.329970 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:55:09.329976 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:55:09.330032 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:55:09.358792 1437114 cri.go:89] found id: ""
	I1209 05:55:09.358825 1437114 logs.go:282] 0 containers: []
	W1209 05:55:09.358834 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:55:09.358840 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:55:09.358934 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:55:09.383783 1437114 cri.go:89] found id: ""
	I1209 05:55:09.383806 1437114 logs.go:282] 0 containers: []
	W1209 05:55:09.383814 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:55:09.383820 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:55:09.383881 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:55:09.409956 1437114 cri.go:89] found id: ""
	I1209 05:55:09.409982 1437114 logs.go:282] 0 containers: []
	W1209 05:55:09.409990 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:55:09.409997 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:55:09.410054 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:55:09.442388 1437114 cri.go:89] found id: ""
	I1209 05:55:09.442471 1437114 logs.go:282] 0 containers: []
	W1209 05:55:09.442502 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:55:09.442524 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:55:09.442611 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:55:09.472213 1437114 cri.go:89] found id: ""
	I1209 05:55:09.472234 1437114 logs.go:282] 0 containers: []
	W1209 05:55:09.472243 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:55:09.472249 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:55:09.472306 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:55:09.500348 1437114 cri.go:89] found id: ""
	I1209 05:55:09.500372 1437114 logs.go:282] 0 containers: []
	W1209 05:55:09.500381 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:55:09.500390 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:55:09.500401 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:55:09.556960 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:55:09.556998 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:55:09.573143 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:55:09.573173 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:55:09.641645 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:55:09.634078    9266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:09.634591    9266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:09.636259    9266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:09.636782    9266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:09.637775    9266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:55:09.634078    9266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:09.634591    9266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:09.636259    9266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:09.636782    9266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:09.637775    9266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:55:09.641669 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:55:09.641682 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:55:09.667979 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:55:09.668100 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:55:12.205096 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:55:12.215660 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:55:12.215729 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:55:12.239566 1437114 cri.go:89] found id: ""
	I1209 05:55:12.239594 1437114 logs.go:282] 0 containers: []
	W1209 05:55:12.239603 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:55:12.239609 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:55:12.239668 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:55:12.267891 1437114 cri.go:89] found id: ""
	I1209 05:55:12.267914 1437114 logs.go:282] 0 containers: []
	W1209 05:55:12.267924 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:55:12.267930 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:55:12.267992 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:55:12.296494 1437114 cri.go:89] found id: ""
	I1209 05:55:12.296523 1437114 logs.go:282] 0 containers: []
	W1209 05:55:12.296532 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:55:12.296539 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:55:12.296602 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:55:12.322105 1437114 cri.go:89] found id: ""
	I1209 05:55:12.322135 1437114 logs.go:282] 0 containers: []
	W1209 05:55:12.322144 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:55:12.322151 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:55:12.322208 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:55:12.347978 1437114 cri.go:89] found id: ""
	I1209 05:55:12.348001 1437114 logs.go:282] 0 containers: []
	W1209 05:55:12.348010 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:55:12.348038 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:55:12.348096 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:55:12.372241 1437114 cri.go:89] found id: ""
	I1209 05:55:12.372275 1437114 logs.go:282] 0 containers: []
	W1209 05:55:12.372311 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:55:12.372318 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:55:12.372384 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:55:12.397758 1437114 cri.go:89] found id: ""
	I1209 05:55:12.397784 1437114 logs.go:282] 0 containers: []
	W1209 05:55:12.397792 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:55:12.397799 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:55:12.397860 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:55:12.422922 1437114 cri.go:89] found id: ""
	I1209 05:55:12.422948 1437114 logs.go:282] 0 containers: []
	W1209 05:55:12.422958 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:55:12.422968 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:55:12.422981 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:55:12.480231 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:55:12.480268 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:55:12.497991 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:55:12.498029 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:55:12.565247 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:55:12.557686    9379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:12.558053    9379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:12.559575    9379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:12.559888    9379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:12.561291    9379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:55:12.557686    9379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:12.558053    9379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:12.559575    9379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:12.559888    9379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:12.561291    9379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:55:12.565279 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:55:12.565293 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:55:12.590420 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:55:12.590459 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:55:15.122535 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:55:15.133065 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:55:15.133140 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:55:15.158369 1437114 cri.go:89] found id: ""
	I1209 05:55:15.158393 1437114 logs.go:282] 0 containers: []
	W1209 05:55:15.158401 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:55:15.158407 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:55:15.158492 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:55:15.184526 1437114 cri.go:89] found id: ""
	I1209 05:55:15.184550 1437114 logs.go:282] 0 containers: []
	W1209 05:55:15.184558 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:55:15.184564 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:55:15.184627 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:55:15.210248 1437114 cri.go:89] found id: ""
	I1209 05:55:15.210288 1437114 logs.go:282] 0 containers: []
	W1209 05:55:15.210300 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:55:15.210312 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:55:15.210376 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:55:15.239458 1437114 cri.go:89] found id: ""
	I1209 05:55:15.239486 1437114 logs.go:282] 0 containers: []
	W1209 05:55:15.239495 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:55:15.239501 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:55:15.239560 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:55:15.265625 1437114 cri.go:89] found id: ""
	I1209 05:55:15.265649 1437114 logs.go:282] 0 containers: []
	W1209 05:55:15.265658 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:55:15.265664 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:55:15.265729 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:55:15.289543 1437114 cri.go:89] found id: ""
	I1209 05:55:15.289577 1437114 logs.go:282] 0 containers: []
	W1209 05:55:15.289587 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:55:15.289593 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:55:15.289663 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:55:15.314575 1437114 cri.go:89] found id: ""
	I1209 05:55:15.314610 1437114 logs.go:282] 0 containers: []
	W1209 05:55:15.314618 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:55:15.314625 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:55:15.314704 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:55:15.339832 1437114 cri.go:89] found id: ""
	I1209 05:55:15.339858 1437114 logs.go:282] 0 containers: []
	W1209 05:55:15.339865 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:55:15.339875 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:55:15.339890 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:55:15.356748 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:55:15.356774 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:55:15.418122 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:55:15.410189    9483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:15.410797    9483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:15.412374    9483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:15.412679    9483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:15.414149    9483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:55:15.410189    9483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:15.410797    9483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:15.412374    9483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:15.412679    9483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:15.414149    9483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:55:15.418145 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:55:15.418157 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:55:15.446826 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:55:15.446866 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:55:15.483531 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:55:15.483560 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:55:18.042444 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:55:18.053775 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:55:18.053853 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:55:18.090768 1437114 cri.go:89] found id: ""
	I1209 05:55:18.090790 1437114 logs.go:282] 0 containers: []
	W1209 05:55:18.090800 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:55:18.090806 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:55:18.090869 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:55:18.117411 1437114 cri.go:89] found id: ""
	I1209 05:55:18.117438 1437114 logs.go:282] 0 containers: []
	W1209 05:55:18.117448 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:55:18.117458 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:55:18.117516 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:55:18.143495 1437114 cri.go:89] found id: ""
	I1209 05:55:18.143523 1437114 logs.go:282] 0 containers: []
	W1209 05:55:18.143531 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:55:18.143538 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:55:18.143601 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:55:18.169282 1437114 cri.go:89] found id: ""
	I1209 05:55:18.169310 1437114 logs.go:282] 0 containers: []
	W1209 05:55:18.169319 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:55:18.169325 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:55:18.169387 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:55:18.194143 1437114 cri.go:89] found id: ""
	I1209 05:55:18.194210 1437114 logs.go:282] 0 containers: []
	W1209 05:55:18.194234 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:55:18.194248 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:55:18.194319 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:55:18.218540 1437114 cri.go:89] found id: ""
	I1209 05:55:18.218564 1437114 logs.go:282] 0 containers: []
	W1209 05:55:18.218573 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:55:18.218579 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:55:18.218635 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:55:18.242500 1437114 cri.go:89] found id: ""
	I1209 05:55:18.242533 1437114 logs.go:282] 0 containers: []
	W1209 05:55:18.242541 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:55:18.242554 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:55:18.242625 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:55:18.268163 1437114 cri.go:89] found id: ""
	I1209 05:55:18.268189 1437114 logs.go:282] 0 containers: []
	W1209 05:55:18.268198 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:55:18.268207 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:55:18.268219 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:55:18.325316 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:55:18.325352 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:55:18.341326 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:55:18.341355 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:55:18.406565 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:55:18.398134    9600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:18.398838    9600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:18.400544    9600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:18.401064    9600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:18.402624    9600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:55:18.406588 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:55:18.406601 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:55:18.432715 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:55:18.433008 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:55:20.971861 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:55:20.983326 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:55:20.983402 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:55:21.009562 1437114 cri.go:89] found id: ""
	I1209 05:55:21.009588 1437114 logs.go:282] 0 containers: []
	W1209 05:55:21.009598 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:55:21.009606 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:55:21.009671 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:55:21.034329 1437114 cri.go:89] found id: ""
	I1209 05:55:21.034355 1437114 logs.go:282] 0 containers: []
	W1209 05:55:21.034364 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:55:21.034370 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:55:21.034444 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:55:21.058554 1437114 cri.go:89] found id: ""
	I1209 05:55:21.058575 1437114 logs.go:282] 0 containers: []
	W1209 05:55:21.058584 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:55:21.058592 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:55:21.058648 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:55:21.086391 1437114 cri.go:89] found id: ""
	I1209 05:55:21.086416 1437114 logs.go:282] 0 containers: []
	W1209 05:55:21.086425 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:55:21.086432 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:55:21.086495 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:55:21.113734 1437114 cri.go:89] found id: ""
	I1209 05:55:21.113757 1437114 logs.go:282] 0 containers: []
	W1209 05:55:21.113771 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:55:21.113777 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:55:21.113836 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:55:21.138081 1437114 cri.go:89] found id: ""
	I1209 05:55:21.138106 1437114 logs.go:282] 0 containers: []
	W1209 05:55:21.138115 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:55:21.138122 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:55:21.138188 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:55:21.162430 1437114 cri.go:89] found id: ""
	I1209 05:55:21.162454 1437114 logs.go:282] 0 containers: []
	W1209 05:55:21.162462 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:55:21.162468 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:55:21.162527 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:55:21.187241 1437114 cri.go:89] found id: ""
	I1209 05:55:21.187269 1437114 logs.go:282] 0 containers: []
	W1209 05:55:21.187277 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:55:21.187286 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:55:21.187298 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:55:21.243731 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:55:21.243768 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:55:21.259723 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:55:21.259752 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:55:21.331265 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:55:21.322926    9714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:21.323669    9714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:21.325163    9714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:21.325582    9714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:21.327036    9714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:55:21.331287 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:55:21.331300 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:55:21.357424 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:55:21.357460 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:55:23.888418 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:55:23.899458 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:55:23.899526 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:55:23.923896 1437114 cri.go:89] found id: ""
	I1209 05:55:23.923962 1437114 logs.go:282] 0 containers: []
	W1209 05:55:23.923986 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:55:23.924004 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:55:23.924112 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:55:23.951339 1437114 cri.go:89] found id: ""
	I1209 05:55:23.951409 1437114 logs.go:282] 0 containers: []
	W1209 05:55:23.951432 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:55:23.951450 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:55:23.951535 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:55:23.980727 1437114 cri.go:89] found id: ""
	I1209 05:55:23.980797 1437114 logs.go:282] 0 containers: []
	W1209 05:55:23.980821 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:55:23.980838 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:55:23.980927 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:55:24.018661 1437114 cri.go:89] found id: ""
	I1209 05:55:24.018691 1437114 logs.go:282] 0 containers: []
	W1209 05:55:24.018702 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:55:24.018709 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:55:24.018778 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:55:24.049508 1437114 cri.go:89] found id: ""
	I1209 05:55:24.049536 1437114 logs.go:282] 0 containers: []
	W1209 05:55:24.049545 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:55:24.049551 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:55:24.049610 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:55:24.074712 1437114 cri.go:89] found id: ""
	I1209 05:55:24.074741 1437114 logs.go:282] 0 containers: []
	W1209 05:55:24.074751 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:55:24.074757 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:55:24.074825 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:55:24.100769 1437114 cri.go:89] found id: ""
	I1209 05:55:24.100795 1437114 logs.go:282] 0 containers: []
	W1209 05:55:24.100804 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:55:24.100810 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:55:24.100871 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:55:24.125003 1437114 cri.go:89] found id: ""
	I1209 05:55:24.125031 1437114 logs.go:282] 0 containers: []
	W1209 05:55:24.125039 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:55:24.125049 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:55:24.125061 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:55:24.194763 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:55:24.186517    9818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:24.187020    9818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:24.188525    9818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:24.188998    9818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:24.190667    9818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:55:24.194832 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:55:24.194870 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:55:24.220205 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:55:24.220239 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:55:24.246742 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:55:24.246769 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:55:24.303551 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:55:24.303584 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:55:26.819975 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:55:26.830655 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:55:26.830725 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:55:26.858629 1437114 cri.go:89] found id: ""
	I1209 05:55:26.858653 1437114 logs.go:282] 0 containers: []
	W1209 05:55:26.858661 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:55:26.858667 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:55:26.858733 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:55:26.883327 1437114 cri.go:89] found id: ""
	I1209 05:55:26.883354 1437114 logs.go:282] 0 containers: []
	W1209 05:55:26.883363 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:55:26.883369 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:55:26.883431 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:55:26.909455 1437114 cri.go:89] found id: ""
	I1209 05:55:26.909475 1437114 logs.go:282] 0 containers: []
	W1209 05:55:26.909484 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:55:26.909490 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:55:26.909551 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:55:26.940313 1437114 cri.go:89] found id: ""
	I1209 05:55:26.940345 1437114 logs.go:282] 0 containers: []
	W1209 05:55:26.940358 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:55:26.940365 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:55:26.940432 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:55:26.974610 1437114 cri.go:89] found id: ""
	I1209 05:55:26.974686 1437114 logs.go:282] 0 containers: []
	W1209 05:55:26.974708 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:55:26.974725 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:55:26.974817 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:55:27.007512 1437114 cri.go:89] found id: ""
	I1209 05:55:27.007592 1437114 logs.go:282] 0 containers: []
	W1209 05:55:27.007616 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:55:27.007637 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:55:27.007748 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:55:27.032955 1437114 cri.go:89] found id: ""
	I1209 05:55:27.033029 1437114 logs.go:282] 0 containers: []
	W1209 05:55:27.033053 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:55:27.033071 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:55:27.033155 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:55:27.057112 1437114 cri.go:89] found id: ""
	I1209 05:55:27.057177 1437114 logs.go:282] 0 containers: []
	W1209 05:55:27.057191 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:55:27.057202 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:55:27.057219 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:55:27.118936 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:55:27.110736    9930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:27.111264    9930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:27.112691    9930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:27.112981    9930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:27.114451    9930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:55:27.118961 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:55:27.118974 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:55:27.144106 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:55:27.144179 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:55:27.174234 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:55:27.174260 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:55:27.230096 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:55:27.230129 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:55:29.746369 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:55:29.756575 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:55:29.756649 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:55:29.784727 1437114 cri.go:89] found id: ""
	I1209 05:55:29.784750 1437114 logs.go:282] 0 containers: []
	W1209 05:55:29.784758 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:55:29.784764 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:55:29.784824 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:55:29.808671 1437114 cri.go:89] found id: ""
	I1209 05:55:29.808696 1437114 logs.go:282] 0 containers: []
	W1209 05:55:29.808705 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:55:29.808711 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:55:29.808793 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:55:29.832440 1437114 cri.go:89] found id: ""
	I1209 05:55:29.832470 1437114 logs.go:282] 0 containers: []
	W1209 05:55:29.832479 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:55:29.832485 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:55:29.832549 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:55:29.857781 1437114 cri.go:89] found id: ""
	I1209 05:55:29.857807 1437114 logs.go:282] 0 containers: []
	W1209 05:55:29.857815 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:55:29.857821 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:55:29.857901 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:55:29.882048 1437114 cri.go:89] found id: ""
	I1209 05:55:29.882073 1437114 logs.go:282] 0 containers: []
	W1209 05:55:29.882081 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:55:29.882087 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:55:29.882176 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:55:29.905398 1437114 cri.go:89] found id: ""
	I1209 05:55:29.905422 1437114 logs.go:282] 0 containers: []
	W1209 05:55:29.905431 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:55:29.905438 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:55:29.905526 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:55:29.931783 1437114 cri.go:89] found id: ""
	I1209 05:55:29.931816 1437114 logs.go:282] 0 containers: []
	W1209 05:55:29.931824 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:55:29.931831 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:55:29.931903 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:55:29.961633 1437114 cri.go:89] found id: ""
	I1209 05:55:29.961665 1437114 logs.go:282] 0 containers: []
	W1209 05:55:29.961673 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:55:29.961683 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:55:29.961695 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:55:30.041769 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:55:30.025451   10042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:30.026529   10042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:30.027374   10042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:30.029780   10042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:30.030693   10042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:55:30.041793 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:55:30.041807 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:55:30.069912 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:55:30.069946 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:55:30.104202 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:55:30.104232 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:55:30.162750 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:55:30.162784 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:55:32.680152 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:55:32.694260 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:55:32.694425 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:55:32.728965 1437114 cri.go:89] found id: ""
	I1209 05:55:32.729064 1437114 logs.go:282] 0 containers: []
	W1209 05:55:32.729088 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:55:32.729108 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:55:32.729212 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:55:32.760196 1437114 cri.go:89] found id: ""
	I1209 05:55:32.760220 1437114 logs.go:282] 0 containers: []
	W1209 05:55:32.760228 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:55:32.760235 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:55:32.760303 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:55:32.785415 1437114 cri.go:89] found id: ""
	I1209 05:55:32.785448 1437114 logs.go:282] 0 containers: []
	W1209 05:55:32.785457 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:55:32.785463 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:55:32.785528 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:55:32.809252 1437114 cri.go:89] found id: ""
	I1209 05:55:32.809327 1437114 logs.go:282] 0 containers: []
	W1209 05:55:32.809343 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:55:32.809357 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:55:32.809417 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:55:32.834255 1437114 cri.go:89] found id: ""
	I1209 05:55:32.834281 1437114 logs.go:282] 0 containers: []
	W1209 05:55:32.834295 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:55:32.834302 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:55:32.834362 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:55:32.859400 1437114 cri.go:89] found id: ""
	I1209 05:55:32.859426 1437114 logs.go:282] 0 containers: []
	W1209 05:55:32.859443 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:55:32.859450 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:55:32.859519 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:55:32.897012 1437114 cri.go:89] found id: ""
	I1209 05:55:32.897037 1437114 logs.go:282] 0 containers: []
	W1209 05:55:32.897046 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:55:32.897053 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:55:32.897167 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:55:32.921653 1437114 cri.go:89] found id: ""
	I1209 05:55:32.921685 1437114 logs.go:282] 0 containers: []
	W1209 05:55:32.921693 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:55:32.921703 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:55:32.921713 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:55:32.948373 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:55:32.948454 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:55:32.981605 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:55:32.981678 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:55:33.043445 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:55:33.043481 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:55:33.059128 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:55:33.059160 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:55:33.122257 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:55:33.113462   10172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:33.113864   10172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:33.115638   10172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:33.116342   10172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:33.117919   10172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:55:35.623296 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:55:35.635539 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:55:35.635647 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:55:35.663702 1437114 cri.go:89] found id: ""
	I1209 05:55:35.663741 1437114 logs.go:282] 0 containers: []
	W1209 05:55:35.663753 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:55:35.663760 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:55:35.663865 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:55:35.707406 1437114 cri.go:89] found id: ""
	I1209 05:55:35.707485 1437114 logs.go:282] 0 containers: []
	W1209 05:55:35.707508 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:55:35.707544 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:55:35.707629 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:55:35.734669 1437114 cri.go:89] found id: ""
	I1209 05:55:35.734749 1437114 logs.go:282] 0 containers: []
	W1209 05:55:35.734771 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:55:35.734811 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:55:35.734897 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:55:35.764935 1437114 cri.go:89] found id: ""
	I1209 05:55:35.765012 1437114 logs.go:282] 0 containers: []
	W1209 05:55:35.765036 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:55:35.765054 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:55:35.765127 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:55:35.788829 1437114 cri.go:89] found id: ""
	I1209 05:55:35.788853 1437114 logs.go:282] 0 containers: []
	W1209 05:55:35.788869 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:55:35.788876 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:55:35.788978 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:55:35.813639 1437114 cri.go:89] found id: ""
	I1209 05:55:35.813666 1437114 logs.go:282] 0 containers: []
	W1209 05:55:35.813674 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:55:35.813681 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:55:35.813787 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:55:35.843416 1437114 cri.go:89] found id: ""
	I1209 05:55:35.843460 1437114 logs.go:282] 0 containers: []
	W1209 05:55:35.843469 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:55:35.843481 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:55:35.843555 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:55:35.868194 1437114 cri.go:89] found id: ""
	I1209 05:55:35.868221 1437114 logs.go:282] 0 containers: []
	W1209 05:55:35.868231 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:55:35.868239 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:55:35.868251 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:55:35.925041 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:55:35.925080 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:55:35.951129 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:55:35.951341 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:55:36.030987 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:55:36.022457   10271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:36.023229   10271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:36.023993   10271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:36.025131   10271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:36.025699   10271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:55:36.031012 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:55:36.031026 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:55:36.058849 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:55:36.058884 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:55:38.588358 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:55:38.598423 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:55:38.598488 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:55:38.622572 1437114 cri.go:89] found id: ""
	I1209 05:55:38.622596 1437114 logs.go:282] 0 containers: []
	W1209 05:55:38.622605 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:55:38.622612 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:55:38.622669 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:55:38.650917 1437114 cri.go:89] found id: ""
	I1209 05:55:38.650942 1437114 logs.go:282] 0 containers: []
	W1209 05:55:38.650950 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:55:38.650956 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:55:38.651013 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:55:38.677402 1437114 cri.go:89] found id: ""
	I1209 05:55:38.677435 1437114 logs.go:282] 0 containers: []
	W1209 05:55:38.677444 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:55:38.677451 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:55:38.677558 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:55:38.707295 1437114 cri.go:89] found id: ""
	I1209 05:55:38.707328 1437114 logs.go:282] 0 containers: []
	W1209 05:55:38.707337 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:55:38.707344 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:55:38.707453 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:55:38.740627 1437114 cri.go:89] found id: ""
	I1209 05:55:38.740652 1437114 logs.go:282] 0 containers: []
	W1209 05:55:38.740660 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:55:38.740667 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:55:38.740727 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:55:38.764991 1437114 cri.go:89] found id: ""
	I1209 05:55:38.765017 1437114 logs.go:282] 0 containers: []
	W1209 05:55:38.765027 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:55:38.765033 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:55:38.765095 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:55:38.789303 1437114 cri.go:89] found id: ""
	I1209 05:55:38.789328 1437114 logs.go:282] 0 containers: []
	W1209 05:55:38.789336 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:55:38.789343 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:55:38.789401 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:55:38.812509 1437114 cri.go:89] found id: ""
	I1209 05:55:38.812533 1437114 logs.go:282] 0 containers: []
	W1209 05:55:38.812541 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:55:38.812551 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:55:38.812562 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:55:38.869277 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:55:38.869309 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:55:38.885634 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:55:38.885663 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:55:38.967787 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:55:38.957406   10382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:38.958335   10382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:38.960358   10382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:38.961032   10382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:38.963013   10382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:55:38.967812 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:55:38.967828 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:55:39.000576 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:55:39.000615 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:55:41.533393 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:55:41.544133 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:55:41.544208 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:55:41.569392 1437114 cri.go:89] found id: ""
	I1209 05:55:41.569418 1437114 logs.go:282] 0 containers: []
	W1209 05:55:41.569428 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:55:41.569436 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:55:41.569499 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:55:41.595491 1437114 cri.go:89] found id: ""
	I1209 05:55:41.595517 1437114 logs.go:282] 0 containers: []
	W1209 05:55:41.595526 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:55:41.595532 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:55:41.595592 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:55:41.622211 1437114 cri.go:89] found id: ""
	I1209 05:55:41.622246 1437114 logs.go:282] 0 containers: []
	W1209 05:55:41.622256 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:55:41.622263 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:55:41.622323 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:55:41.646745 1437114 cri.go:89] found id: ""
	I1209 05:55:41.646770 1437114 logs.go:282] 0 containers: []
	W1209 05:55:41.646779 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:55:41.646785 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:55:41.646846 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:55:41.674665 1437114 cri.go:89] found id: ""
	I1209 05:55:41.674689 1437114 logs.go:282] 0 containers: []
	W1209 05:55:41.674699 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:55:41.674706 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:55:41.674768 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:55:41.702586 1437114 cri.go:89] found id: ""
	I1209 05:55:41.702610 1437114 logs.go:282] 0 containers: []
	W1209 05:55:41.702619 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:55:41.702628 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:55:41.702704 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:55:41.741493 1437114 cri.go:89] found id: ""
	I1209 05:55:41.741515 1437114 logs.go:282] 0 containers: []
	W1209 05:55:41.741523 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:55:41.741530 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:55:41.741666 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:55:41.768353 1437114 cri.go:89] found id: ""
	I1209 05:55:41.768465 1437114 logs.go:282] 0 containers: []
	W1209 05:55:41.768479 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:55:41.768490 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:55:41.768529 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:55:41.831484 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:55:41.823412   10488 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:41.824163   10488 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:41.825769   10488 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:41.826063   10488 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:41.827557   10488 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:55:41.823412   10488 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:41.824163   10488 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:41.825769   10488 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:41.826063   10488 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:41.827557   10488 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:55:41.831504 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:55:41.831517 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:55:41.857187 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:55:41.857222 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:55:41.887092 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:55:41.887123 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:55:41.943306 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:55:41.943341 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
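Editor's note: the four log sources collected in the pass above (kubelet, dmesg, containerd, container status) can be reproduced by hand on the node. A minimal sketch follows; this is not minikube's actual ssh_runner, just local os/exec with the command strings copied verbatim from the log lines above.

	package main

	import (
		"fmt"
		"os/exec"
	)

	// The exact shell commands minikube's log gatherer runs in the pass above.
	var gatherCmds = map[string]string{
		"kubelet":          "sudo journalctl -u kubelet -n 400",
		"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
		"containerd":       "sudo journalctl -u containerd -n 400",
		"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
	}

	func main() {
		for name, cmd := range gatherCmds {
			// CombinedOutput captures stdout and stderr together, like the log dump does.
			out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
			fmt.Printf("== %s (err=%v) ==\n%s\n", name, err, out)
		}
	}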
	I1209 05:55:44.461424 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:55:44.472240 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:55:44.472340 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:55:44.498935 1437114 cri.go:89] found id: ""
	I1209 05:55:44.498961 1437114 logs.go:282] 0 containers: []
	W1209 05:55:44.498970 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:55:44.498976 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:55:44.499034 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:55:44.523535 1437114 cri.go:89] found id: ""
	I1209 05:55:44.523564 1437114 logs.go:282] 0 containers: []
	W1209 05:55:44.523573 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:55:44.523579 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:55:44.523637 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:55:44.548432 1437114 cri.go:89] found id: ""
	I1209 05:55:44.548455 1437114 logs.go:282] 0 containers: []
	W1209 05:55:44.548463 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:55:44.548469 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:55:44.548526 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:55:44.573002 1437114 cri.go:89] found id: ""
	I1209 05:55:44.573024 1437114 logs.go:282] 0 containers: []
	W1209 05:55:44.573034 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:55:44.573040 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:55:44.573098 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:55:44.596807 1437114 cri.go:89] found id: ""
	I1209 05:55:44.596829 1437114 logs.go:282] 0 containers: []
	W1209 05:55:44.596838 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:55:44.596846 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:55:44.596901 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:55:44.624387 1437114 cri.go:89] found id: ""
	I1209 05:55:44.624456 1437114 logs.go:282] 0 containers: []
	W1209 05:55:44.624478 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:55:44.624492 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:55:44.624571 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:55:44.648117 1437114 cri.go:89] found id: ""
	I1209 05:55:44.648143 1437114 logs.go:282] 0 containers: []
	W1209 05:55:44.648151 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:55:44.648158 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:55:44.648229 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:55:44.671908 1437114 cri.go:89] found id: ""
	I1209 05:55:44.671939 1437114 logs.go:282] 0 containers: []
	W1209 05:55:44.671948 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:55:44.671972 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:55:44.671989 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:55:44.732458 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:55:44.732536 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:55:44.753248 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:55:44.753273 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:55:44.822117 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:55:44.814788   10605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:44.815161   10605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:44.816602   10605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:44.816898   10605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:44.818170   10605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:55:44.814788   10605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:44.815161   10605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:44.816602   10605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:44.816898   10605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:44.818170   10605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:55:44.822137 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:55:44.822149 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:55:44.848565 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:55:44.848600 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:55:47.376875 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:55:47.386961 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:55:47.387031 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:55:47.413420 1437114 cri.go:89] found id: ""
	I1209 05:55:47.413444 1437114 logs.go:282] 0 containers: []
	W1209 05:55:47.413452 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:55:47.413458 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:55:47.413519 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:55:47.441969 1437114 cri.go:89] found id: ""
	I1209 05:55:47.442001 1437114 logs.go:282] 0 containers: []
	W1209 05:55:47.442010 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:55:47.442016 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:55:47.442081 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:55:47.465166 1437114 cri.go:89] found id: ""
	I1209 05:55:47.465195 1437114 logs.go:282] 0 containers: []
	W1209 05:55:47.465210 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:55:47.465216 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:55:47.465283 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:55:47.493704 1437114 cri.go:89] found id: ""
	I1209 05:55:47.493730 1437114 logs.go:282] 0 containers: []
	W1209 05:55:47.493739 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:55:47.493745 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:55:47.493821 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:55:47.519554 1437114 cri.go:89] found id: ""
	I1209 05:55:47.519589 1437114 logs.go:282] 0 containers: []
	W1209 05:55:47.519598 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:55:47.519604 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:55:47.519671 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:55:47.549334 1437114 cri.go:89] found id: ""
	I1209 05:55:47.549367 1437114 logs.go:282] 0 containers: []
	W1209 05:55:47.549376 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:55:47.549383 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:55:47.549456 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:55:47.578946 1437114 cri.go:89] found id: ""
	I1209 05:55:47.578980 1437114 logs.go:282] 0 containers: []
	W1209 05:55:47.578989 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:55:47.578995 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:55:47.579062 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:55:47.603683 1437114 cri.go:89] found id: ""
	I1209 05:55:47.603716 1437114 logs.go:282] 0 containers: []
	W1209 05:55:47.603725 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:55:47.603734 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:55:47.603745 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:55:47.619447 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:55:47.619482 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:55:47.687529 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:55:47.675579   10710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:47.676174   10710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:47.679656   10710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:47.680257   10710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:47.681964   10710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:55:47.675579   10710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:47.676174   10710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:47.679656   10710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:47.680257   10710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:47.681964   10710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:55:47.687594 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:55:47.687641 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:55:47.715721 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:55:47.715792 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:55:47.745866 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:55:47.745889 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
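Editor's note: each "listing CRI containers" / `found id: ""` pair above is one probe per control-plane component, and the empty crictl output is what produces the `No container was found matching` warnings. A hedged sketch of that probe loop, its shape inferred from the cri.go and logs.go lines rather than taken from minikube source:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// Components probed, in the order each pass of the log checks them.
	var components = []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}

	func main() {
		for _, name := range components {
			// Same command as the ssh_runner lines above; --quiet prints only container IDs.
			out, _ := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
			ids := strings.Fields(string(out))
			if len(ids) == 0 {
				// Mirrors the W-level "No container was found matching" log lines.
				fmt.Printf("No container was found matching %q\n", name)
				continue
			}
			fmt.Printf("%s: %v\n", name, ids)
		}
	}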
	I1209 05:55:50.305015 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:55:50.315642 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:55:50.315787 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:55:50.341274 1437114 cri.go:89] found id: ""
	I1209 05:55:50.341298 1437114 logs.go:282] 0 containers: []
	W1209 05:55:50.341306 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:55:50.341314 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:55:50.341370 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:55:50.366500 1437114 cri.go:89] found id: ""
	I1209 05:55:50.366533 1437114 logs.go:282] 0 containers: []
	W1209 05:55:50.366542 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:55:50.366548 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:55:50.366613 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:55:50.390751 1437114 cri.go:89] found id: ""
	I1209 05:55:50.390787 1437114 logs.go:282] 0 containers: []
	W1209 05:55:50.390796 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:55:50.390802 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:55:50.390867 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:55:50.418576 1437114 cri.go:89] found id: ""
	I1209 05:55:50.418601 1437114 logs.go:282] 0 containers: []
	W1209 05:55:50.418610 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:55:50.418616 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:55:50.418683 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:55:50.447207 1437114 cri.go:89] found id: ""
	I1209 05:55:50.447250 1437114 logs.go:282] 0 containers: []
	W1209 05:55:50.447261 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:55:50.447267 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:55:50.447339 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:55:50.476321 1437114 cri.go:89] found id: ""
	I1209 05:55:50.476346 1437114 logs.go:282] 0 containers: []
	W1209 05:55:50.476354 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:55:50.476372 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:55:50.476430 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:55:50.501331 1437114 cri.go:89] found id: ""
	I1209 05:55:50.501356 1437114 logs.go:282] 0 containers: []
	W1209 05:55:50.501365 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:55:50.501371 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:55:50.501439 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:55:50.525182 1437114 cri.go:89] found id: ""
	I1209 05:55:50.525207 1437114 logs.go:282] 0 containers: []
	W1209 05:55:50.525215 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:55:50.525224 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:55:50.525262 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:55:50.584512 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:55:50.584550 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:55:50.600341 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:55:50.600369 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:55:50.667248 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:55:50.658895   10823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:50.659509   10823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:50.661016   10823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:50.661529   10823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:50.663114   10823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:55:50.658895   10823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:50.659509   10823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:50.661016   10823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:50.661529   10823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:50.663114   10823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:55:50.667314 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:55:50.667346 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:55:50.695874 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:55:50.695911 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:55:53.232139 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:55:53.242299 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:55:53.242369 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:55:53.265738 1437114 cri.go:89] found id: ""
	I1209 05:55:53.265763 1437114 logs.go:282] 0 containers: []
	W1209 05:55:53.265771 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:55:53.265777 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:55:53.265834 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:55:53.289547 1437114 cri.go:89] found id: ""
	I1209 05:55:53.289571 1437114 logs.go:282] 0 containers: []
	W1209 05:55:53.289580 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:55:53.289586 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:55:53.289644 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:55:53.314432 1437114 cri.go:89] found id: ""
	I1209 05:55:53.314457 1437114 logs.go:282] 0 containers: []
	W1209 05:55:53.314466 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:55:53.314472 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:55:53.314529 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:55:53.338078 1437114 cri.go:89] found id: ""
	I1209 05:55:53.338100 1437114 logs.go:282] 0 containers: []
	W1209 05:55:53.338109 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:55:53.338115 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:55:53.338190 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:55:53.362597 1437114 cri.go:89] found id: ""
	I1209 05:55:53.362623 1437114 logs.go:282] 0 containers: []
	W1209 05:55:53.362632 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:55:53.362638 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:55:53.362700 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:55:53.387075 1437114 cri.go:89] found id: ""
	I1209 05:55:53.387100 1437114 logs.go:282] 0 containers: []
	W1209 05:55:53.387108 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:55:53.387115 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:55:53.387181 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:55:53.410813 1437114 cri.go:89] found id: ""
	I1209 05:55:53.410836 1437114 logs.go:282] 0 containers: []
	W1209 05:55:53.410845 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:55:53.410850 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:55:53.410910 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:55:53.439085 1437114 cri.go:89] found id: ""
	I1209 05:55:53.439107 1437114 logs.go:282] 0 containers: []
	W1209 05:55:53.439115 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:55:53.439124 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:55:53.439135 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:55:53.496416 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:55:53.496450 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:55:53.512950 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:55:53.512979 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:55:53.592134 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:55:53.583228   10933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:53.583903   10933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:53.585634   10933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:53.586183   10933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:53.587806   10933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:55:53.583228   10933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:53.583903   10933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:53.585634   10933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:53.586183   10933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:53.587806   10933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:55:53.592155 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:55:53.592168 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:55:53.620855 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:55:53.620901 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
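Editor's note: the recurring `failed describe nodes` block is the same probe each time. The version-pinned kubectl binary is pointed at the node-local kubeconfig, and with nothing listening on localhost:8443 it exits with status 1 and the `connection refused` stderr shown above. A minimal reproduction, with the binary and kubeconfig paths copied from the log, meant to be run on the node itself:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Exact command from the log; it fails with exit status 1 for as long
		// as no apiserver is listening on localhost:8443.
		cmd := exec.Command("/bin/bash", "-c",
			"sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig")
		out, err := cmd.CombinedOutput()
		fmt.Printf("err=%v\n%s", err, out)
	}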
	I1209 05:55:56.151858 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:55:56.162360 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:55:56.162444 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:55:56.192447 1437114 cri.go:89] found id: ""
	I1209 05:55:56.192474 1437114 logs.go:282] 0 containers: []
	W1209 05:55:56.192482 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:55:56.192488 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:55:56.192545 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:55:56.230900 1437114 cri.go:89] found id: ""
	I1209 05:55:56.230927 1437114 logs.go:282] 0 containers: []
	W1209 05:55:56.230935 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:55:56.230941 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:55:56.231005 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:55:56.264649 1437114 cri.go:89] found id: ""
	I1209 05:55:56.264673 1437114 logs.go:282] 0 containers: []
	W1209 05:55:56.264683 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:55:56.264689 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:55:56.264748 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:55:56.287754 1437114 cri.go:89] found id: ""
	I1209 05:55:56.287780 1437114 logs.go:282] 0 containers: []
	W1209 05:55:56.287788 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:55:56.287794 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:55:56.287851 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:55:56.311939 1437114 cri.go:89] found id: ""
	I1209 05:55:56.311966 1437114 logs.go:282] 0 containers: []
	W1209 05:55:56.311974 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:55:56.311981 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:55:56.312071 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:55:56.336812 1437114 cri.go:89] found id: ""
	I1209 05:55:56.336847 1437114 logs.go:282] 0 containers: []
	W1209 05:55:56.336856 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:55:56.336862 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:55:56.336926 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:55:56.364355 1437114 cri.go:89] found id: ""
	I1209 05:55:56.364378 1437114 logs.go:282] 0 containers: []
	W1209 05:55:56.364387 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:55:56.364394 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:55:56.364451 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:55:56.388220 1437114 cri.go:89] found id: ""
	I1209 05:55:56.388242 1437114 logs.go:282] 0 containers: []
	W1209 05:55:56.388251 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:55:56.388260 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:55:56.388272 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:55:56.451922 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:55:56.443739   11039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:56.444234   11039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:56.445911   11039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:56.446440   11039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:56.448091   11039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:55:56.443739   11039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:56.444234   11039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:56.445911   11039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:56.446440   11039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:56.448091   11039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:55:56.451941 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:55:56.451955 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:55:56.477213 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:55:56.477256 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:55:56.504874 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:55:56.504908 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:55:56.561753 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:55:56.561793 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:55:59.078916 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:55:59.089470 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:55:59.089545 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:55:59.113298 1437114 cri.go:89] found id: ""
	I1209 05:55:59.113324 1437114 logs.go:282] 0 containers: []
	W1209 05:55:59.113332 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:55:59.113339 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:55:59.113402 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:55:59.141250 1437114 cri.go:89] found id: ""
	I1209 05:55:59.141278 1437114 logs.go:282] 0 containers: []
	W1209 05:55:59.141286 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:55:59.141292 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:55:59.141351 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:55:59.170020 1437114 cri.go:89] found id: ""
	I1209 05:55:59.170044 1437114 logs.go:282] 0 containers: []
	W1209 05:55:59.170052 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:55:59.170059 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:55:59.170122 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:55:59.210757 1437114 cri.go:89] found id: ""
	I1209 05:55:59.210792 1437114 logs.go:282] 0 containers: []
	W1209 05:55:59.210801 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:55:59.210808 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:55:59.210873 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:55:59.237433 1437114 cri.go:89] found id: ""
	I1209 05:55:59.237470 1437114 logs.go:282] 0 containers: []
	W1209 05:55:59.237479 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:55:59.237486 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:55:59.237551 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:55:59.263923 1437114 cri.go:89] found id: ""
	I1209 05:55:59.263959 1437114 logs.go:282] 0 containers: []
	W1209 05:55:59.263968 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:55:59.263975 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:55:59.264071 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:55:59.288850 1437114 cri.go:89] found id: ""
	I1209 05:55:59.288916 1437114 logs.go:282] 0 containers: []
	W1209 05:55:59.288940 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:55:59.288954 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:55:59.289029 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:55:59.316320 1437114 cri.go:89] found id: ""
	I1209 05:55:59.316347 1437114 logs.go:282] 0 containers: []
	W1209 05:55:59.316356 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:55:59.316365 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:55:59.316376 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:55:59.383644 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:55:59.373968   11152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:59.374816   11152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:59.376482   11152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:59.376830   11152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:59.378974   11152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:55:59.373968   11152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:59.374816   11152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:59.376482   11152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:59.376830   11152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:59.378974   11152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:55:59.383666 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:55:59.383680 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:55:59.409556 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:55:59.409591 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:55:59.440707 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:55:59.440737 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:55:59.496851 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:55:59.496887 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:56:02.013397 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:56:02.023815 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:56:02.023883 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:56:02.054212 1437114 cri.go:89] found id: ""
	I1209 05:56:02.054240 1437114 logs.go:282] 0 containers: []
	W1209 05:56:02.054249 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:56:02.054255 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:56:02.054323 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:56:02.079245 1437114 cri.go:89] found id: ""
	I1209 05:56:02.079274 1437114 logs.go:282] 0 containers: []
	W1209 05:56:02.079283 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:56:02.079289 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:56:02.079347 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:56:02.104356 1437114 cri.go:89] found id: ""
	I1209 05:56:02.104399 1437114 logs.go:282] 0 containers: []
	W1209 05:56:02.104408 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:56:02.104415 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:56:02.104478 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:56:02.129688 1437114 cri.go:89] found id: ""
	I1209 05:56:02.129753 1437114 logs.go:282] 0 containers: []
	W1209 05:56:02.129777 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:56:02.129795 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:56:02.129886 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:56:02.159435 1437114 cri.go:89] found id: ""
	I1209 05:56:02.159463 1437114 logs.go:282] 0 containers: []
	W1209 05:56:02.159471 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:56:02.159478 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:56:02.159537 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:56:02.193945 1437114 cri.go:89] found id: ""
	I1209 05:56:02.193969 1437114 logs.go:282] 0 containers: []
	W1209 05:56:02.193987 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:56:02.193994 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:56:02.194093 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:56:02.230499 1437114 cri.go:89] found id: ""
	I1209 05:56:02.230528 1437114 logs.go:282] 0 containers: []
	W1209 05:56:02.230542 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:56:02.230549 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:56:02.230650 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:56:02.261955 1437114 cri.go:89] found id: ""
	I1209 05:56:02.262021 1437114 logs.go:282] 0 containers: []
	W1209 05:56:02.262046 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:56:02.262063 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:56:02.262075 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:56:02.278208 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:56:02.278245 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:56:02.342511 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:56:02.334138   11267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:02.334823   11267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:02.336452   11267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:02.337017   11267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:02.338543   11267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:56:02.342581 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:56:02.342603 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:56:02.367883 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:56:02.367920 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:56:02.398560 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:56:02.398587 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
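
Every cri.go pass in this trace runs the same probe: crictl with a name filter, where empty output means no matching container. Below is a minimal Go sketch of that pattern, run locally instead of through minikube's SSH runner; listCRIContainers and the sample name list are illustrative, not minikube's actual API.

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listCRIContainers mirrors the probe logged above: ask crictl for all
    // container IDs whose name matches; crictl prints one ID per line, so an
    // empty result means "no container was found matching" that name.
    func listCRIContainers(name string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        for _, name := range []string{"kube-apiserver", "etcd", "coredns"} {
            ids, err := listCRIContainers(name)
            switch {
            case err != nil:
                fmt.Printf("listing %q failed: %v\n", name, err)
            case len(ids) == 0:
                fmt.Printf("No container was found matching %q\n", name)
            default:
                fmt.Printf("%s: %v\n", name, ids)
            }
        }
    }
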
	I1209 05:56:04.956142 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:56:04.966664 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:56:04.966728 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:56:05.000769 1437114 cri.go:89] found id: ""
	I1209 05:56:05.000792 1437114 logs.go:282] 0 containers: []
	W1209 05:56:05.000801 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:56:05.000807 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:56:05.000868 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:56:05.030686 1437114 cri.go:89] found id: ""
	I1209 05:56:05.030713 1437114 logs.go:282] 0 containers: []
	W1209 05:56:05.030726 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:56:05.030733 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:56:05.030792 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:56:05.055515 1437114 cri.go:89] found id: ""
	I1209 05:56:05.055541 1437114 logs.go:282] 0 containers: []
	W1209 05:56:05.055550 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:56:05.055556 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:56:05.055614 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:56:05.080557 1437114 cri.go:89] found id: ""
	I1209 05:56:05.080584 1437114 logs.go:282] 0 containers: []
	W1209 05:56:05.080593 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:56:05.080599 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:56:05.080659 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:56:05.106686 1437114 cri.go:89] found id: ""
	I1209 05:56:05.106714 1437114 logs.go:282] 0 containers: []
	W1209 05:56:05.106724 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:56:05.106731 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:56:05.106792 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:56:05.131985 1437114 cri.go:89] found id: ""
	I1209 05:56:05.132044 1437114 logs.go:282] 0 containers: []
	W1209 05:56:05.132053 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:56:05.132060 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:56:05.132127 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:56:05.158936 1437114 cri.go:89] found id: ""
	I1209 05:56:05.159002 1437114 logs.go:282] 0 containers: []
	W1209 05:56:05.159027 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:56:05.159045 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:56:05.159134 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:56:05.186586 1437114 cri.go:89] found id: ""
	I1209 05:56:05.186658 1437114 logs.go:282] 0 containers: []
	W1209 05:56:05.186682 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:56:05.186704 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:56:05.186745 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:56:05.252531 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:56:05.252568 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:56:05.268794 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:56:05.268823 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:56:05.330847 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:56:05.322209   11379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:05.322901   11379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:05.324496   11379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:05.325041   11379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:05.326643   11379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:56:05.330870 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:56:05.330882 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:56:05.356845 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:56:05.356877 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:56:07.894100 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:56:07.904726 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:56:07.904808 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:56:07.934685 1437114 cri.go:89] found id: ""
	I1209 05:56:07.934707 1437114 logs.go:282] 0 containers: []
	W1209 05:56:07.934715 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:56:07.934727 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:56:07.934786 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:56:07.966688 1437114 cri.go:89] found id: ""
	I1209 05:56:07.966715 1437114 logs.go:282] 0 containers: []
	W1209 05:56:07.966724 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:56:07.966730 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:56:07.966791 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:56:07.997688 1437114 cri.go:89] found id: ""
	I1209 05:56:07.997718 1437114 logs.go:282] 0 containers: []
	W1209 05:56:07.997727 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:56:07.997733 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:56:07.997794 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:56:08.028703 1437114 cri.go:89] found id: ""
	I1209 05:56:08.028738 1437114 logs.go:282] 0 containers: []
	W1209 05:56:08.028748 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:56:08.028756 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:56:08.028836 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:56:08.055186 1437114 cri.go:89] found id: ""
	I1209 05:56:08.055216 1437114 logs.go:282] 0 containers: []
	W1209 05:56:08.055225 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:56:08.055232 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:56:08.055298 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:56:08.081977 1437114 cri.go:89] found id: ""
	I1209 05:56:08.082005 1437114 logs.go:282] 0 containers: []
	W1209 05:56:08.082014 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:56:08.082020 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:56:08.082094 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:56:08.106694 1437114 cri.go:89] found id: ""
	I1209 05:56:08.106719 1437114 logs.go:282] 0 containers: []
	W1209 05:56:08.106728 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:56:08.106735 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:56:08.106794 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:56:08.131242 1437114 cri.go:89] found id: ""
	I1209 05:56:08.131266 1437114 logs.go:282] 0 containers: []
	W1209 05:56:08.131274 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:56:08.131284 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:56:08.131296 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:56:08.200236 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:56:08.191954   11490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:08.192809   11490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:08.194381   11490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:08.194676   11490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:08.196205   11490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:56:08.200261 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:56:08.200275 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:56:08.228642 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:56:08.228684 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:56:08.262181 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:56:08.262210 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:56:08.316796 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:56:08.316828 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
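
Each gathering pass fans out over a fixed set of shell commands, all visible verbatim in the lines above: journalctl for kubelet and containerd, a filtered dmesg, kubectl describe nodes against the node's kubeconfig, and a crictl/docker process listing. Here is a condensed sketch of that fan-out, assuming the commands run on the node itself; the gatherLogs helper and its ordering are illustrative, not minikube's logs.go.

    package main

    import (
        "fmt"
        "os/exec"
    )

    // gatherLogs runs the same diagnostic commands seen in each pass above.
    // CombinedOutput captures stdout and stderr together; a non-zero exit,
    // like the failing kubectl describe nodes, is reported but not fatal.
    func gatherLogs() {
        cmds := []struct{ name, cmd string }{
            {"kubelet", "sudo journalctl -u kubelet -n 400"},
            {"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
            {"describe nodes", "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"},
            {"containerd", "sudo journalctl -u containerd -n 400"},
            {"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
        }
        for _, c := range cmds {
            out, err := exec.Command("/bin/bash", "-c", c.cmd).CombinedOutput()
            if err != nil {
                fmt.Printf("gathering %s failed: %v\n", c.name, err)
            }
            fmt.Printf("=== %s ===\n%s", c.name, out)
        }
    }

    func main() { gatherLogs() }
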
	I1209 05:56:10.832826 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:56:10.843625 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:56:10.843696 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:56:10.867741 1437114 cri.go:89] found id: ""
	I1209 05:56:10.867808 1437114 logs.go:282] 0 containers: []
	W1209 05:56:10.867832 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:56:10.867854 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:56:10.867940 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:56:10.893251 1437114 cri.go:89] found id: ""
	I1209 05:56:10.893284 1437114 logs.go:282] 0 containers: []
	W1209 05:56:10.893292 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:56:10.893298 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:56:10.893357 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:56:10.921874 1437114 cri.go:89] found id: ""
	I1209 05:56:10.921897 1437114 logs.go:282] 0 containers: []
	W1209 05:56:10.921906 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:56:10.921912 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:56:10.921977 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:56:10.948235 1437114 cri.go:89] found id: ""
	I1209 05:56:10.948257 1437114 logs.go:282] 0 containers: []
	W1209 05:56:10.948272 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:56:10.948279 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:56:10.948337 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:56:10.977204 1437114 cri.go:89] found id: ""
	I1209 05:56:10.977226 1437114 logs.go:282] 0 containers: []
	W1209 05:56:10.977234 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:56:10.977239 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:56:10.977298 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:56:11.011653 1437114 cri.go:89] found id: ""
	I1209 05:56:11.011677 1437114 logs.go:282] 0 containers: []
	W1209 05:56:11.011685 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:56:11.011692 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:56:11.011753 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:56:11.038552 1437114 cri.go:89] found id: ""
	I1209 05:56:11.038575 1437114 logs.go:282] 0 containers: []
	W1209 05:56:11.038584 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:56:11.038589 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:56:11.038648 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:56:11.068058 1437114 cri.go:89] found id: ""
	I1209 05:56:11.068081 1437114 logs.go:282] 0 containers: []
	W1209 05:56:11.068089 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:56:11.068098 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:56:11.068109 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:56:11.124172 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:56:11.124208 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:56:11.140275 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:56:11.140316 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:56:11.220317 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:56:11.212396   11608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:11.213001   11608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:11.214543   11608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:11.215016   11608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:11.216494   11608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:56:11.220349 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:56:11.220362 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:56:11.245629 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:56:11.245662 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:56:13.776003 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:56:13.786369 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:56:13.786448 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:56:13.809520 1437114 cri.go:89] found id: ""
	I1209 05:56:13.809544 1437114 logs.go:282] 0 containers: []
	W1209 05:56:13.809553 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:56:13.809559 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:56:13.809618 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:56:13.833347 1437114 cri.go:89] found id: ""
	I1209 05:56:13.833370 1437114 logs.go:282] 0 containers: []
	W1209 05:56:13.833378 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:56:13.833384 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:56:13.833446 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:56:13.857799 1437114 cri.go:89] found id: ""
	I1209 05:56:13.857830 1437114 logs.go:282] 0 containers: []
	W1209 05:56:13.857840 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:56:13.857846 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:56:13.857906 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:56:13.882625 1437114 cri.go:89] found id: ""
	I1209 05:56:13.882658 1437114 logs.go:282] 0 containers: []
	W1209 05:56:13.882667 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:56:13.882673 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:56:13.882742 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:56:13.910846 1437114 cri.go:89] found id: ""
	I1209 05:56:13.910880 1437114 logs.go:282] 0 containers: []
	W1209 05:56:13.910889 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:56:13.910895 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:56:13.910962 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:56:13.942418 1437114 cri.go:89] found id: ""
	I1209 05:56:13.942483 1437114 logs.go:282] 0 containers: []
	W1209 05:56:13.942510 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:56:13.942528 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:56:13.942615 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:56:13.972617 1437114 cri.go:89] found id: ""
	I1209 05:56:13.972686 1437114 logs.go:282] 0 containers: []
	W1209 05:56:13.972710 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:56:13.972728 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:56:13.972814 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:56:14.010643 1437114 cri.go:89] found id: ""
	I1209 05:56:14.010672 1437114 logs.go:282] 0 containers: []
	W1209 05:56:14.010690 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:56:14.010712 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:56:14.010743 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:56:14.045403 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:56:14.045489 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:56:14.103757 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:56:14.103793 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:56:14.119622 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:56:14.119648 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:56:14.199726 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:56:14.184447   11733 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:14.191243   11733 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:14.191781   11733 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:14.193355   11733 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:14.193912   11733 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:56:14.199794 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:56:14.199821 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:56:16.729940 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:56:16.740423 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:56:16.740497 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:56:16.765729 1437114 cri.go:89] found id: ""
	I1209 05:56:16.765755 1437114 logs.go:282] 0 containers: []
	W1209 05:56:16.765763 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:56:16.765770 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:56:16.765831 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:56:16.793724 1437114 cri.go:89] found id: ""
	I1209 05:56:16.793750 1437114 logs.go:282] 0 containers: []
	W1209 05:56:16.793759 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:56:16.793765 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:56:16.793824 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:56:16.821402 1437114 cri.go:89] found id: ""
	I1209 05:56:16.821429 1437114 logs.go:282] 0 containers: []
	W1209 05:56:16.821437 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:56:16.821444 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:56:16.821504 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:56:16.846074 1437114 cri.go:89] found id: ""
	I1209 05:56:16.846101 1437114 logs.go:282] 0 containers: []
	W1209 05:56:16.846110 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:56:16.846116 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:56:16.846175 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:56:16.870665 1437114 cri.go:89] found id: ""
	I1209 05:56:16.870689 1437114 logs.go:282] 0 containers: []
	W1209 05:56:16.870698 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:56:16.870705 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:56:16.870785 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:56:16.894509 1437114 cri.go:89] found id: ""
	I1209 05:56:16.894542 1437114 logs.go:282] 0 containers: []
	W1209 05:56:16.894550 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:56:16.894557 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:56:16.894651 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:56:16.921935 1437114 cri.go:89] found id: ""
	I1209 05:56:16.921962 1437114 logs.go:282] 0 containers: []
	W1209 05:56:16.921971 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:56:16.921977 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:56:16.922049 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:56:16.950536 1437114 cri.go:89] found id: ""
	I1209 05:56:16.950570 1437114 logs.go:282] 0 containers: []
	W1209 05:56:16.950579 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:56:16.950588 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:56:16.950599 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:56:17.008406 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:56:17.008442 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:56:17.024072 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:56:17.024098 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:56:17.089436 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:56:17.080816   11835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:17.081612   11835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:17.083298   11835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:17.083818   11835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:17.085479   11835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:56:17.089456 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:56:17.089468 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:56:17.114751 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:56:17.114785 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
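
The trace's cadence, a pgrep check roughly every 2.5 seconds followed by a fresh gathering pass while it keeps failing, is a plain poll-until-deadline loop. A minimal sketch under that reading follows; the interval, timeout, and gatherLogs stub are assumptions, not minikube's actual settings.

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // gatherLogs stands in for the diagnostic fan-out sketched earlier.
    func gatherLogs() { fmt.Println("gathering kubelet, dmesg, containerd, container status ...") }

    // waitForAPIServer polls for a running kube-apiserver process, collecting
    // diagnostics on every miss. pgrep exits non-zero when nothing matches,
    // which Run surfaces as an error.
    func waitForAPIServer(timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
                return nil // a kube-apiserver process is running
            }
            gatherLogs()
            time.Sleep(2500 * time.Millisecond)
        }
        return fmt.Errorf("kube-apiserver did not start within %s", timeout)
    }

    func main() {
        if err := waitForAPIServer(2 * time.Minute); err != nil {
            fmt.Println(err)
        }
    }
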
	I1209 05:56:19.649189 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:56:19.659355 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:56:19.659709 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:56:19.687359 1437114 cri.go:89] found id: ""
	I1209 05:56:19.687393 1437114 logs.go:282] 0 containers: []
	W1209 05:56:19.687402 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:56:19.687408 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:56:19.687482 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:56:19.711167 1437114 cri.go:89] found id: ""
	I1209 05:56:19.711241 1437114 logs.go:282] 0 containers: []
	W1209 05:56:19.711264 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:56:19.711282 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:56:19.711361 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:56:19.734776 1437114 cri.go:89] found id: ""
	I1209 05:56:19.734843 1437114 logs.go:282] 0 containers: []
	W1209 05:56:19.734868 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:56:19.734886 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:56:19.734978 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:56:19.758075 1437114 cri.go:89] found id: ""
	I1209 05:56:19.758101 1437114 logs.go:282] 0 containers: []
	W1209 05:56:19.758111 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:56:19.758117 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:56:19.758191 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:56:19.781866 1437114 cri.go:89] found id: ""
	I1209 05:56:19.781889 1437114 logs.go:282] 0 containers: []
	W1209 05:56:19.781897 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:56:19.781903 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:56:19.782011 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:56:19.806779 1437114 cri.go:89] found id: ""
	I1209 05:56:19.806811 1437114 logs.go:282] 0 containers: []
	W1209 05:56:19.806820 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:56:19.806827 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:56:19.806896 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:56:19.830892 1437114 cri.go:89] found id: ""
	I1209 05:56:19.830931 1437114 logs.go:282] 0 containers: []
	W1209 05:56:19.830940 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:56:19.830946 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:56:19.831013 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:56:19.855119 1437114 cri.go:89] found id: ""
	I1209 05:56:19.855151 1437114 logs.go:282] 0 containers: []
	W1209 05:56:19.855160 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:56:19.855168 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:56:19.855180 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:56:19.918437 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:56:19.910860   11934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:19.911393   11934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:19.912883   11934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:19.913323   11934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:19.914743   11934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:56:19.918456 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:56:19.918468 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:56:19.948986 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:56:19.949022 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:56:19.983513 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:56:19.983543 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:56:20.044570 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:56:20.044611 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:56:22.561138 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:56:22.571631 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:56:22.571701 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:56:22.597481 1437114 cri.go:89] found id: ""
	I1209 05:56:22.597507 1437114 logs.go:282] 0 containers: []
	W1209 05:56:22.597516 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:56:22.597522 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:56:22.597583 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:56:22.620910 1437114 cri.go:89] found id: ""
	I1209 05:56:22.620934 1437114 logs.go:282] 0 containers: []
	W1209 05:56:22.620942 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:56:22.620948 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:56:22.621010 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:56:22.645762 1437114 cri.go:89] found id: ""
	I1209 05:56:22.645786 1437114 logs.go:282] 0 containers: []
	W1209 05:56:22.645794 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:56:22.645802 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:56:22.645860 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:56:22.674030 1437114 cri.go:89] found id: ""
	I1209 05:56:22.674055 1437114 logs.go:282] 0 containers: []
	W1209 05:56:22.674063 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:56:22.674069 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:56:22.674129 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:56:22.697420 1437114 cri.go:89] found id: ""
	I1209 05:56:22.697483 1437114 logs.go:282] 0 containers: []
	W1209 05:56:22.697498 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:56:22.697505 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:56:22.697572 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:56:22.721275 1437114 cri.go:89] found id: ""
	I1209 05:56:22.721303 1437114 logs.go:282] 0 containers: []
	W1209 05:56:22.721311 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:56:22.721318 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:56:22.721375 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:56:22.751174 1437114 cri.go:89] found id: ""
	I1209 05:56:22.751207 1437114 logs.go:282] 0 containers: []
	W1209 05:56:22.751216 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:56:22.751223 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:56:22.751297 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:56:22.783334 1437114 cri.go:89] found id: ""
	I1209 05:56:22.783359 1437114 logs.go:282] 0 containers: []
	W1209 05:56:22.783368 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:56:22.783377 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:56:22.783388 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:56:22.798903 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:56:22.798931 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:56:22.863930 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:56:22.855168   12050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:22.855903   12050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:22.857473   12050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:22.858541   12050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:22.859308   12050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:56:22.863951 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:56:22.863964 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:56:22.889010 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:56:22.889044 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:56:22.917472 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:56:22.917497 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
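
All of the describe-nodes failures above share one symptom: nothing is listening on localhost:8443, so kubectl's discovery requests die at TCP connect ("connection refused") before any HTTP exchange happens. A quick reachability probe makes that explicit; this is a hypothetical pre-check for illustration, not something minikube runs.

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // apiReachable reports whether anything accepts TCP connections on the
    // apiserver address. "connect: connection refused" means no listener at
    // all, as opposed to a listener rejecting the request at the HTTP layer.
    func apiReachable(addr string) bool {
        conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
        if err != nil {
            fmt.Printf("dial %s: %v\n", addr, err)
            return false
        }
        conn.Close()
        return true
    }

    func main() {
        fmt.Println("apiserver reachable:", apiReachable("localhost:8443"))
    }
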
	I1209 05:56:25.477751 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:56:25.488155 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:56:25.488227 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:56:25.513691 1437114 cri.go:89] found id: ""
	I1209 05:56:25.513726 1437114 logs.go:282] 0 containers: []
	W1209 05:56:25.513735 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:56:25.513742 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:56:25.513815 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:56:25.538394 1437114 cri.go:89] found id: ""
	I1209 05:56:25.538426 1437114 logs.go:282] 0 containers: []
	W1209 05:56:25.538434 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:56:25.538441 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:56:25.538507 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:56:25.565992 1437114 cri.go:89] found id: ""
	I1209 05:56:25.566014 1437114 logs.go:282] 0 containers: []
	W1209 05:56:25.566023 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:56:25.566028 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:56:25.566084 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:56:25.594238 1437114 cri.go:89] found id: ""
	I1209 05:56:25.594273 1437114 logs.go:282] 0 containers: []
	W1209 05:56:25.594283 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:56:25.594289 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:56:25.594357 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:56:25.618528 1437114 cri.go:89] found id: ""
	I1209 05:56:25.618554 1437114 logs.go:282] 0 containers: []
	W1209 05:56:25.618562 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:56:25.618569 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:56:25.618630 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:56:25.645761 1437114 cri.go:89] found id: ""
	I1209 05:56:25.645793 1437114 logs.go:282] 0 containers: []
	W1209 05:56:25.645802 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:56:25.645809 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:56:25.645868 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:56:25.673275 1437114 cri.go:89] found id: ""
	I1209 05:56:25.673303 1437114 logs.go:282] 0 containers: []
	W1209 05:56:25.673313 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:56:25.673320 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:56:25.673378 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:56:25.698776 1437114 cri.go:89] found id: ""
	I1209 05:56:25.698801 1437114 logs.go:282] 0 containers: []
	W1209 05:56:25.698810 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
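
The block above is minikube's per-component container probe: the same crictl listing is run for each expected control-plane piece, and every probe in this run comes back empty. A minimal sketch of the equivalent check done by hand inside the node (e.g. via minikube ssh); the component names and crictl flags are taken verbatim from the run above, only the loop wrapper is added here:

    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      if [ -z "$ids" ]; then
        echo "no container matching \"$name\""   # the case logged above
      else
        echo "$name: $ids"
      fi
    done

Because every ID list is empty, each cycle falls through to gathering kubelet, containerd, and dmesg logs instead of inspecting container state.
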
	I1209 05:56:25.698819 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:56:25.698831 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:56:25.758726 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:56:25.758763 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:56:25.774459 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:56:25.774498 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:56:25.837634 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:56:25.829791   12165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:25.830357   12165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:25.831894   12165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:25.832310   12165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:25.833747   12165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:56:25.829791   12165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:25.830357   12165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:25.831894   12165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:25.832310   12165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:25.833747   12165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
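
The failure block above is consistent with the empty crictl probes: every kubectl attempt dies with dial tcp [::1]:8443: connect: connection refused, i.e. nothing is listening on the apiserver port inside the node at all. A by-hand check of the same failure mode, as a sketch; the pgrep pattern is verbatim from this run, while the ss and curl calls assume those tools are present in the node image:

    sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "no kube-apiserver process"
    sudo ss -ltn 'sport = :8443'                  # prints nothing if no listener
    curl -sk https://localhost:8443/healthz || echo "apiserver not reachable on 8443"
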
	I1209 05:56:25.837654 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:56:25.837666 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:56:25.863059 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:56:25.863089 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
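
The "container status" one-liner just above is a fallback chain: the backticks substitute the full path to crictl when `which` finds it (or the bare name, which will then fail), and the outer || falls back to the docker CLI if the whole crictl invocation fails. A readable equivalent with the same behaviour, spelled out:

    if sudo "$(which crictl || echo crictl)" ps -a; then
      :                      # CRI runtime answered (containerd in this job)
    else
      sudo docker ps -a      # fall back to the docker CLI
    fi
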
	I1209 05:56:28.390209 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:56:28.400783 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:56:28.400858 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:56:28.431158 1437114 cri.go:89] found id: ""
	I1209 05:56:28.431186 1437114 logs.go:282] 0 containers: []
	W1209 05:56:28.431195 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:56:28.431201 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:56:28.431257 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:56:28.466252 1437114 cri.go:89] found id: ""
	I1209 05:56:28.466304 1437114 logs.go:282] 0 containers: []
	W1209 05:56:28.466313 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:56:28.466319 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:56:28.466387 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:56:28.495101 1437114 cri.go:89] found id: ""
	I1209 05:56:28.495128 1437114 logs.go:282] 0 containers: []
	W1209 05:56:28.495135 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:56:28.495141 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:56:28.495205 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:56:28.519814 1437114 cri.go:89] found id: ""
	I1209 05:56:28.519840 1437114 logs.go:282] 0 containers: []
	W1209 05:56:28.519848 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:56:28.519854 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:56:28.519917 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:56:28.545987 1437114 cri.go:89] found id: ""
	I1209 05:56:28.546014 1437114 logs.go:282] 0 containers: []
	W1209 05:56:28.546022 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:56:28.546029 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:56:28.546087 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:56:28.569653 1437114 cri.go:89] found id: ""
	I1209 05:56:28.569677 1437114 logs.go:282] 0 containers: []
	W1209 05:56:28.569686 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:56:28.569693 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:56:28.569750 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:56:28.592506 1437114 cri.go:89] found id: ""
	I1209 05:56:28.592531 1437114 logs.go:282] 0 containers: []
	W1209 05:56:28.592540 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:56:28.592546 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:56:28.592603 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:56:28.616082 1437114 cri.go:89] found id: ""
	I1209 05:56:28.616109 1437114 logs.go:282] 0 containers: []
	W1209 05:56:28.616118 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:56:28.616127 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:56:28.616140 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:56:28.641671 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:56:28.641702 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:56:28.667950 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:56:28.667976 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:56:28.723545 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:56:28.723579 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:56:28.739105 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:56:28.739133 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:56:28.799453 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:56:28.791383   12291 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:28.792152   12291 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:28.793337   12291 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:28.793895   12291 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:28.795399   12291 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:56:28.791383   12291 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:28.792152   12291 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:28.793337   12291 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:28.793895   12291 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:28.795399   12291 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:56:31.300174 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:56:31.310601 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:56:31.310671 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:56:31.335264 1437114 cri.go:89] found id: ""
	I1209 05:56:31.335286 1437114 logs.go:282] 0 containers: []
	W1209 05:56:31.335295 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:56:31.335301 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:56:31.335359 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:56:31.359354 1437114 cri.go:89] found id: ""
	I1209 05:56:31.359377 1437114 logs.go:282] 0 containers: []
	W1209 05:56:31.359386 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:56:31.359392 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:56:31.359451 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:56:31.385360 1437114 cri.go:89] found id: ""
	I1209 05:56:31.385383 1437114 logs.go:282] 0 containers: []
	W1209 05:56:31.385392 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:56:31.385398 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:56:31.385463 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:56:31.410224 1437114 cri.go:89] found id: ""
	I1209 05:56:31.410250 1437114 logs.go:282] 0 containers: []
	W1209 05:56:31.410258 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:56:31.410265 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:56:31.410359 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:56:31.451992 1437114 cri.go:89] found id: ""
	I1209 05:56:31.452040 1437114 logs.go:282] 0 containers: []
	W1209 05:56:31.452049 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:56:31.452056 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:56:31.452116 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:56:31.484950 1437114 cri.go:89] found id: ""
	I1209 05:56:31.484979 1437114 logs.go:282] 0 containers: []
	W1209 05:56:31.484987 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:56:31.484994 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:56:31.485052 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:56:31.518900 1437114 cri.go:89] found id: ""
	I1209 05:56:31.518929 1437114 logs.go:282] 0 containers: []
	W1209 05:56:31.518938 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:56:31.518944 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:56:31.519004 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:56:31.542368 1437114 cri.go:89] found id: ""
	I1209 05:56:31.542398 1437114 logs.go:282] 0 containers: []
	W1209 05:56:31.542406 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:56:31.542414 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:56:31.542426 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:56:31.597391 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:56:31.597426 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:56:31.613542 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:56:31.613568 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:56:31.675768 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:56:31.667793   12388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:31.668366   12388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:31.670085   12388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:31.670523   12388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:31.672049   12388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:56:31.667793   12388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:31.668366   12388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:31.670085   12388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:31.670523   12388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:31.672049   12388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:56:31.675790 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:56:31.675801 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:56:31.705823 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:56:31.705860 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
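
With no containers to inspect, each cycle falls back to the unit journals and the kernel ring buffer. The three log sources can be replayed by hand inside the node; these commands are verbatim from the runs above:

    sudo journalctl -u kubelet -n 400        # last 400 kubelet journal lines
    sudo journalctl -u containerd -n 400     # last 400 containerd journal lines
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
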
	I1209 05:56:34.233697 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:56:34.244491 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:56:34.244562 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:56:34.269357 1437114 cri.go:89] found id: ""
	I1209 05:56:34.269382 1437114 logs.go:282] 0 containers: []
	W1209 05:56:34.269393 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:56:34.269399 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:56:34.269455 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:56:34.298358 1437114 cri.go:89] found id: ""
	I1209 05:56:34.298389 1437114 logs.go:282] 0 containers: []
	W1209 05:56:34.298398 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:56:34.298404 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:56:34.298463 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:56:34.323280 1437114 cri.go:89] found id: ""
	I1209 05:56:34.323301 1437114 logs.go:282] 0 containers: []
	W1209 05:56:34.323309 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:56:34.323315 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:56:34.323372 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:56:34.347068 1437114 cri.go:89] found id: ""
	I1209 05:56:34.347144 1437114 logs.go:282] 0 containers: []
	W1209 05:56:34.347166 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:56:34.347185 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:56:34.347268 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:56:34.370494 1437114 cri.go:89] found id: ""
	I1209 05:56:34.370519 1437114 logs.go:282] 0 containers: []
	W1209 05:56:34.370528 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:56:34.370534 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:56:34.370593 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:56:34.394561 1437114 cri.go:89] found id: ""
	I1209 05:56:34.394586 1437114 logs.go:282] 0 containers: []
	W1209 05:56:34.394594 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:56:34.394601 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:56:34.394665 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:56:34.418680 1437114 cri.go:89] found id: ""
	I1209 05:56:34.418708 1437114 logs.go:282] 0 containers: []
	W1209 05:56:34.418717 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:56:34.418723 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:56:34.418781 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:56:34.456783 1437114 cri.go:89] found id: ""
	I1209 05:56:34.456811 1437114 logs.go:282] 0 containers: []
	W1209 05:56:34.456819 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:56:34.456828 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:56:34.456839 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:56:34.520119 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:56:34.520160 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:56:34.536245 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:56:34.536271 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:56:34.598782 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:56:34.590200   12497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:34.590688   12497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:34.592324   12497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:34.592957   12497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:34.593923   12497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:56:34.590200   12497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:34.590688   12497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:34.592324   12497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:34.592957   12497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:34.593923   12497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:56:34.598802 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:56:34.598813 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:56:34.623426 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:56:34.623456 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
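
Note how the "describe nodes" step is invoked in every cycle: minikube does not assume a host kubectl or kubeconfig, but runs the version-matched binary shipped inside the node against the node-local kubeconfig. Verbatim from the cycles above:

    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig

Here it exits 1 each time because that kubeconfig points at localhost:8443, where nothing is listening.
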
	I1209 05:56:37.156294 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:56:37.167303 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:56:37.167376 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:56:37.213639 1437114 cri.go:89] found id: ""
	I1209 05:56:37.213661 1437114 logs.go:282] 0 containers: []
	W1209 05:56:37.213670 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:56:37.213676 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:56:37.213734 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:56:37.251381 1437114 cri.go:89] found id: ""
	I1209 05:56:37.251451 1437114 logs.go:282] 0 containers: []
	W1209 05:56:37.251472 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:56:37.251489 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:56:37.251577 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:56:37.276652 1437114 cri.go:89] found id: ""
	I1209 05:56:37.276683 1437114 logs.go:282] 0 containers: []
	W1209 05:56:37.276718 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:56:37.276730 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:56:37.276807 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:56:37.306291 1437114 cri.go:89] found id: ""
	I1209 05:56:37.306355 1437114 logs.go:282] 0 containers: []
	W1209 05:56:37.306378 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:56:37.306397 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:56:37.306480 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:56:37.330690 1437114 cri.go:89] found id: ""
	I1209 05:56:37.330761 1437114 logs.go:282] 0 containers: []
	W1209 05:56:37.330784 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:56:37.330803 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:56:37.330891 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:56:37.360974 1437114 cri.go:89] found id: ""
	I1209 05:56:37.360996 1437114 logs.go:282] 0 containers: []
	W1209 05:56:37.361005 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:56:37.361011 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:56:37.361067 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:56:37.385070 1437114 cri.go:89] found id: ""
	I1209 05:56:37.385134 1437114 logs.go:282] 0 containers: []
	W1209 05:56:37.385149 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:56:37.385157 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:56:37.385214 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:56:37.408838 1437114 cri.go:89] found id: ""
	I1209 05:56:37.408872 1437114 logs.go:282] 0 containers: []
	W1209 05:56:37.408881 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:56:37.408890 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:56:37.408904 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:56:37.470471 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:56:37.470552 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:56:37.490560 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:56:37.490636 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:56:37.566595 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:56:37.558458   12607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:37.559098   12607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:37.560793   12607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:37.561249   12607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:37.562761   12607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:56:37.558458   12607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:37.559098   12607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:37.560793   12607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:37.561249   12607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:37.562761   12607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:56:37.566616 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:56:37.566629 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:56:37.591926 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:56:37.591966 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:56:40.120818 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:56:40.132357 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:56:40.132434 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:56:40.159057 1437114 cri.go:89] found id: ""
	I1209 05:56:40.159127 1437114 logs.go:282] 0 containers: []
	W1209 05:56:40.159150 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:56:40.159172 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:56:40.159260 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:56:40.194739 1437114 cri.go:89] found id: ""
	I1209 05:56:40.194762 1437114 logs.go:282] 0 containers: []
	W1209 05:56:40.194770 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:56:40.194777 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:56:40.194842 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:56:40.229613 1437114 cri.go:89] found id: ""
	I1209 05:56:40.229642 1437114 logs.go:282] 0 containers: []
	W1209 05:56:40.229651 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:56:40.229657 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:56:40.229720 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:56:40.266599 1437114 cri.go:89] found id: ""
	I1209 05:56:40.266622 1437114 logs.go:282] 0 containers: []
	W1209 05:56:40.266631 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:56:40.266643 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:56:40.266705 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:56:40.293941 1437114 cri.go:89] found id: ""
	I1209 05:56:40.293964 1437114 logs.go:282] 0 containers: []
	W1209 05:56:40.293973 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:56:40.293979 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:56:40.294037 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:56:40.319374 1437114 cri.go:89] found id: ""
	I1209 05:56:40.319407 1437114 logs.go:282] 0 containers: []
	W1209 05:56:40.319416 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:56:40.319423 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:56:40.319497 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:56:40.344221 1437114 cri.go:89] found id: ""
	I1209 05:56:40.344254 1437114 logs.go:282] 0 containers: []
	W1209 05:56:40.344263 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:56:40.344268 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:56:40.344333 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:56:40.369033 1437114 cri.go:89] found id: ""
	I1209 05:56:40.369056 1437114 logs.go:282] 0 containers: []
	W1209 05:56:40.369066 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:56:40.369076 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:56:40.369088 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:56:40.398480 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:56:40.398506 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:56:40.454913 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:56:40.454992 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:56:40.471549 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:56:40.471617 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:56:40.537419 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:56:40.529111   12732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:40.529745   12732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:40.531419   12732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:40.532052   12732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:40.533493   12732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:56:40.529111   12732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:40.529745   12732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:40.531419   12732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:40.532052   12732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:40.533493   12732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:56:40.537440 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:56:40.537452 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:56:43.063560 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:56:43.074056 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:56:43.074128 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:56:43.098443 1437114 cri.go:89] found id: ""
	I1209 05:56:43.098467 1437114 logs.go:282] 0 containers: []
	W1209 05:56:43.098476 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:56:43.098483 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:56:43.098543 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:56:43.123378 1437114 cri.go:89] found id: ""
	I1209 05:56:43.123405 1437114 logs.go:282] 0 containers: []
	W1209 05:56:43.123414 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:56:43.123420 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:56:43.123483 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:56:43.152283 1437114 cri.go:89] found id: ""
	I1209 05:56:43.152313 1437114 logs.go:282] 0 containers: []
	W1209 05:56:43.152322 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:56:43.152329 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:56:43.152389 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:56:43.176720 1437114 cri.go:89] found id: ""
	I1209 05:56:43.176744 1437114 logs.go:282] 0 containers: []
	W1209 05:56:43.176752 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:56:43.176759 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:56:43.176816 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:56:43.202038 1437114 cri.go:89] found id: ""
	I1209 05:56:43.202066 1437114 logs.go:282] 0 containers: []
	W1209 05:56:43.202074 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:56:43.202081 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:56:43.202136 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:56:43.231595 1437114 cri.go:89] found id: ""
	I1209 05:56:43.231620 1437114 logs.go:282] 0 containers: []
	W1209 05:56:43.231629 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:56:43.231636 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:56:43.231693 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:56:43.261330 1437114 cri.go:89] found id: ""
	I1209 05:56:43.261351 1437114 logs.go:282] 0 containers: []
	W1209 05:56:43.261359 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:56:43.261365 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:56:43.261422 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:56:43.290154 1437114 cri.go:89] found id: ""
	I1209 05:56:43.290175 1437114 logs.go:282] 0 containers: []
	W1209 05:56:43.290183 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:56:43.290192 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:56:43.290204 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:56:43.318398 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:56:43.318424 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:56:43.377076 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:56:43.377112 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:56:43.392846 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:56:43.392877 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:56:43.468351 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:56:43.457690   12848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:43.458459   12848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:43.460248   12848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:43.460927   12848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:43.462463   12848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:56:43.457690   12848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:43.458459   12848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:43.460248   12848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:43.460927   12848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:43.462463   12848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:56:43.468373 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:56:43.468384 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:56:46.000301 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:56:46.013622 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:56:46.013695 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:56:46.043040 1437114 cri.go:89] found id: ""
	I1209 05:56:46.043066 1437114 logs.go:282] 0 containers: []
	W1209 05:56:46.043074 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:56:46.043081 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:56:46.043164 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:56:46.073486 1437114 cri.go:89] found id: ""
	I1209 05:56:46.073512 1437114 logs.go:282] 0 containers: []
	W1209 05:56:46.073521 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:56:46.073529 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:56:46.073593 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:56:46.099148 1437114 cri.go:89] found id: ""
	I1209 05:56:46.099175 1437114 logs.go:282] 0 containers: []
	W1209 05:56:46.099185 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:56:46.099193 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:56:46.099252 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:56:46.123167 1437114 cri.go:89] found id: ""
	I1209 05:56:46.123191 1437114 logs.go:282] 0 containers: []
	W1209 05:56:46.123200 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:56:46.123207 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:56:46.123271 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:56:46.151973 1437114 cri.go:89] found id: ""
	I1209 05:56:46.151999 1437114 logs.go:282] 0 containers: []
	W1209 05:56:46.152008 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:56:46.152035 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:56:46.152098 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:56:46.177766 1437114 cri.go:89] found id: ""
	I1209 05:56:46.177798 1437114 logs.go:282] 0 containers: []
	W1209 05:56:46.177807 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:56:46.177813 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:56:46.177871 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:56:46.206986 1437114 cri.go:89] found id: ""
	I1209 05:56:46.207008 1437114 logs.go:282] 0 containers: []
	W1209 05:56:46.207017 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:56:46.207023 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:56:46.207081 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:56:46.233946 1437114 cri.go:89] found id: ""
	I1209 05:56:46.233968 1437114 logs.go:282] 0 containers: []
	W1209 05:56:46.233977 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:56:46.233986 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:56:46.233997 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:56:46.298127 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:56:46.289387   12939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:46.289949   12939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:46.291474   12939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:46.292041   12939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:46.293829   12939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:56:46.289387   12939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:46.289949   12939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:46.291474   12939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:46.292041   12939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:46.293829   12939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:56:46.298150 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:56:46.298162 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:56:46.323208 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:56:46.323239 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:56:46.355077 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:56:46.355106 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:56:46.410415 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:56:46.410452 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:56:48.926721 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:56:48.937257 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:56:48.937332 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:56:48.961648 1437114 cri.go:89] found id: ""
	I1209 05:56:48.961676 1437114 logs.go:282] 0 containers: []
	W1209 05:56:48.961685 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:56:48.961698 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:56:48.961758 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:56:48.989144 1437114 cri.go:89] found id: ""
	I1209 05:56:48.989169 1437114 logs.go:282] 0 containers: []
	W1209 05:56:48.989178 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:56:48.989184 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:56:48.989240 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:56:49.014588 1437114 cri.go:89] found id: ""
	I1209 05:56:49.014613 1437114 logs.go:282] 0 containers: []
	W1209 05:56:49.014622 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:56:49.014628 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:56:49.014691 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:56:49.038311 1437114 cri.go:89] found id: ""
	I1209 05:56:49.038339 1437114 logs.go:282] 0 containers: []
	W1209 05:56:49.038349 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:56:49.038355 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:56:49.038414 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:56:49.062714 1437114 cri.go:89] found id: ""
	I1209 05:56:49.062740 1437114 logs.go:282] 0 containers: []
	W1209 05:56:49.062748 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:56:49.062754 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:56:49.062814 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:56:49.089769 1437114 cri.go:89] found id: ""
	I1209 05:56:49.089798 1437114 logs.go:282] 0 containers: []
	W1209 05:56:49.089807 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:56:49.089815 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:56:49.089892 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:56:49.118456 1437114 cri.go:89] found id: ""
	I1209 05:56:49.118477 1437114 logs.go:282] 0 containers: []
	W1209 05:56:49.118486 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:56:49.118492 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:56:49.118548 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:56:49.146213 1437114 cri.go:89] found id: ""
	I1209 05:56:49.146241 1437114 logs.go:282] 0 containers: []
	W1209 05:56:49.146260 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:56:49.146286 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:56:49.146304 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:56:49.171755 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:56:49.171792 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:56:49.210632 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:56:49.210700 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:56:49.274853 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:56:49.274890 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:56:49.290746 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:56:49.290774 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:56:49.352595 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:56:49.344509   13068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:49.345192   13068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:49.346929   13068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:49.347389   13068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:49.348821   13068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:56:49.344509   13068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:49.345192   13068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:49.346929   13068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:49.347389   13068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:49.348821   13068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:56:51.854276 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:56:51.864787 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:56:51.864868 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:56:51.888399 1437114 cri.go:89] found id: ""
	I1209 05:56:51.888422 1437114 logs.go:282] 0 containers: []
	W1209 05:56:51.888431 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:56:51.888437 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:56:51.888499 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:56:51.913838 1437114 cri.go:89] found id: ""
	I1209 05:56:51.913865 1437114 logs.go:282] 0 containers: []
	W1209 05:56:51.913873 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:56:51.913880 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:56:51.913961 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:56:51.938727 1437114 cri.go:89] found id: ""
	I1209 05:56:51.938768 1437114 logs.go:282] 0 containers: []
	W1209 05:56:51.938794 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:56:51.938811 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:56:51.938885 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:56:51.964549 1437114 cri.go:89] found id: ""
	I1209 05:56:51.964576 1437114 logs.go:282] 0 containers: []
	W1209 05:56:51.964584 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:56:51.964590 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:56:51.964689 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:56:51.988777 1437114 cri.go:89] found id: ""
	I1209 05:56:51.988806 1437114 logs.go:282] 0 containers: []
	W1209 05:56:51.988815 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:56:51.988821 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:56:51.988908 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:56:52.017110 1437114 cri.go:89] found id: ""
	I1209 05:56:52.017138 1437114 logs.go:282] 0 containers: []
	W1209 05:56:52.017147 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:56:52.017154 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:56:52.017219 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:56:52.043184 1437114 cri.go:89] found id: ""
	I1209 05:56:52.043211 1437114 logs.go:282] 0 containers: []
	W1209 05:56:52.043219 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:56:52.043225 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:56:52.043293 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:56:52.068591 1437114 cri.go:89] found id: ""
	I1209 05:56:52.068617 1437114 logs.go:282] 0 containers: []
	W1209 05:56:52.068626 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:56:52.068636 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:56:52.068652 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:56:52.135805 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:56:52.127242   13157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:52.127996   13157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:52.129698   13157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:52.130086   13157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:52.131645   13157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:56:52.127242   13157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:52.127996   13157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:52.129698   13157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:52.130086   13157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:52.131645   13157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:56:52.135824 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:56:52.135837 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:56:52.160848 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:56:52.160884 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:56:52.206902 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:56:52.206930 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:56:52.269206 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:56:52.269242 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:56:54.786534 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:56:54.796870 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:56:54.796942 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:56:54.820891 1437114 cri.go:89] found id: ""
	I1209 05:56:54.820912 1437114 logs.go:282] 0 containers: []
	W1209 05:56:54.820920 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:56:54.820926 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:56:54.820983 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:56:54.844219 1437114 cri.go:89] found id: ""
	I1209 05:56:54.844243 1437114 logs.go:282] 0 containers: []
	W1209 05:56:54.844251 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:56:54.844257 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:56:54.844314 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:56:54.867467 1437114 cri.go:89] found id: ""
	I1209 05:56:54.867540 1437114 logs.go:282] 0 containers: []
	W1209 05:56:54.867564 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:56:54.867585 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:56:54.867678 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:56:54.891985 1437114 cri.go:89] found id: ""
	I1209 05:56:54.892007 1437114 logs.go:282] 0 containers: []
	W1209 05:56:54.892053 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:56:54.892060 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:56:54.892135 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:56:54.915079 1437114 cri.go:89] found id: ""
	I1209 05:56:54.915104 1437114 logs.go:282] 0 containers: []
	W1209 05:56:54.915112 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:56:54.915119 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:56:54.915175 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:56:54.941729 1437114 cri.go:89] found id: ""
	I1209 05:56:54.941768 1437114 logs.go:282] 0 containers: []
	W1209 05:56:54.941776 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:56:54.941783 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:56:54.941840 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:56:54.970033 1437114 cri.go:89] found id: ""
	I1209 05:56:54.970058 1437114 logs.go:282] 0 containers: []
	W1209 05:56:54.970066 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:56:54.970072 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:56:54.970134 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:56:55.004188 1437114 cri.go:89] found id: ""
	I1209 05:56:55.004230 1437114 logs.go:282] 0 containers: []
	W1209 05:56:55.004240 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:56:55.004250 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:56:55.004264 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:56:55.034996 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:56:55.035025 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:56:55.091574 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:56:55.091610 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:56:55.108302 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:56:55.108331 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:56:55.172944 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:56:55.163616   13287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:55.164399   13287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:55.166155   13287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:55.166466   13287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:55.168546   13287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:56:55.163616   13287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:55.164399   13287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:55.166155   13287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:55.166466   13287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:55.168546   13287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:56:55.172964 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:56:55.172985 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:56:57.700005 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:56:57.714279 1437114 out.go:203] 
	W1209 05:56:57.717113 1437114 out.go:285] X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1209 05:56:57.717154 1437114 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1209 05:56:57.717169 1437114 out.go:285] * Related issues:
	W1209 05:56:57.717186 1437114 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	W1209 05:56:57.717204 1437114 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	I1209 05:56:57.720208 1437114 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 09 05:50:54 newest-cni-262540 containerd[557]: time="2025-12-09T05:50:54.140457949Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 09 05:50:54 newest-cni-262540 containerd[557]: time="2025-12-09T05:50:54.140531030Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 09 05:50:54 newest-cni-262540 containerd[557]: time="2025-12-09T05:50:54.140633493Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 09 05:50:54 newest-cni-262540 containerd[557]: time="2025-12-09T05:50:54.140719603Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 09 05:50:54 newest-cni-262540 containerd[557]: time="2025-12-09T05:50:54.140780722Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 09 05:50:54 newest-cni-262540 containerd[557]: time="2025-12-09T05:50:54.140839280Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 09 05:50:54 newest-cni-262540 containerd[557]: time="2025-12-09T05:50:54.140899447Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 09 05:50:54 newest-cni-262540 containerd[557]: time="2025-12-09T05:50:54.140960081Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 09 05:50:54 newest-cni-262540 containerd[557]: time="2025-12-09T05:50:54.141027665Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 09 05:50:54 newest-cni-262540 containerd[557]: time="2025-12-09T05:50:54.141111133Z" level=info msg="Connect containerd service"
	Dec 09 05:50:54 newest-cni-262540 containerd[557]: time="2025-12-09T05:50:54.141485580Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 09 05:50:54 newest-cni-262540 containerd[557]: time="2025-12-09T05:50:54.142145599Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 09 05:50:54 newest-cni-262540 containerd[557]: time="2025-12-09T05:50:54.154449407Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 09 05:50:54 newest-cni-262540 containerd[557]: time="2025-12-09T05:50:54.154518566Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 09 05:50:54 newest-cni-262540 containerd[557]: time="2025-12-09T05:50:54.154573474Z" level=info msg="Start subscribing containerd event"
	Dec 09 05:50:54 newest-cni-262540 containerd[557]: time="2025-12-09T05:50:54.154621735Z" level=info msg="Start recovering state"
	Dec 09 05:50:54 newest-cni-262540 containerd[557]: time="2025-12-09T05:50:54.192831791Z" level=info msg="Start event monitor"
	Dec 09 05:50:54 newest-cni-262540 containerd[557]: time="2025-12-09T05:50:54.193022399Z" level=info msg="Start cni network conf syncer for default"
	Dec 09 05:50:54 newest-cni-262540 containerd[557]: time="2025-12-09T05:50:54.193095554Z" level=info msg="Start streaming server"
	Dec 09 05:50:54 newest-cni-262540 containerd[557]: time="2025-12-09T05:50:54.193158043Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 09 05:50:54 newest-cni-262540 containerd[557]: time="2025-12-09T05:50:54.193246959Z" level=info msg="runtime interface starting up..."
	Dec 09 05:50:54 newest-cni-262540 containerd[557]: time="2025-12-09T05:50:54.193315946Z" level=info msg="starting plugins..."
	Dec 09 05:50:54 newest-cni-262540 containerd[557]: time="2025-12-09T05:50:54.193399907Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 09 05:50:54 newest-cni-262540 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 09 05:50:54 newest-cni-262540 containerd[557]: time="2025-12-09T05:50:54.195297741Z" level=info msg="containerd successfully booted in 0.080443s"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:57:07.053658   13780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:57:07.054261   13780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:57:07.055756   13780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:57:07.056307   13780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:57:07.057805   13780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +25.904254] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:14] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:16] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:18] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:19] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:30] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:32] overlayfs: idmapped layers are currently not supported
	[ +28.114653] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:33] overlayfs: idmapped layers are currently not supported
	[ +23.720849] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:34] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:35] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:36] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:37] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:38] overlayfs: idmapped layers are currently not supported
	[ +23.656275] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:39] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:57] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:58] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:00] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:02] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:03] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:05] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:06] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 9 05:31] overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	
	
	==> kernel <==
	 05:57:07 up  8:39,  0 user,  load average: 1.12, 0.76, 1.11
	Linux newest-cni-262540 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 09 05:57:02 newest-cni-262540 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 09 05:57:03 newest-cni-262540 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 491.
	Dec 09 05:57:03 newest-cni-262540 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 05:57:04 newest-cni-262540 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 05:57:04 newest-cni-262540 kubelet[13625]: E1209 05:57:04.651254   13625 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 09 05:57:04 newest-cni-262540 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 09 05:57:04 newest-cni-262540 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 09 05:57:05 newest-cni-262540 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
	Dec 09 05:57:05 newest-cni-262540 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 05:57:05 newest-cni-262540 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 05:57:05 newest-cni-262540 kubelet[13664]: E1209 05:57:05.482562   13664 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 09 05:57:05 newest-cni-262540 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 09 05:57:05 newest-cni-262540 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 09 05:57:06 newest-cni-262540 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
	Dec 09 05:57:06 newest-cni-262540 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 05:57:06 newest-cni-262540 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 05:57:06 newest-cni-262540 kubelet[13683]: E1209 05:57:06.267837   13683 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 09 05:57:06 newest-cni-262540 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 09 05:57:06 newest-cni-262540 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 09 05:57:06 newest-cni-262540 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
	Dec 09 05:57:06 newest-cni-262540 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 05:57:06 newest-cni-262540 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 05:57:06 newest-cni-262540 kubelet[13763]: E1209 05:57:06.997338   13763 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 09 05:57:07 newest-cni-262540 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 09 05:57:07 newest-cni-262540 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
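The kubelet section above shows the same validation failure on every restart: this kubelet build refuses to start on a cgroup v1 host, so no static pods (and therefore no kube-apiserver) can ever come up, which matches the K8S_APISERVER_MISSING exit earlier in the log. A minimal way to confirm the node's cgroup mode, assuming the standard coreutils stat is available inside the kicbase image (the ssh invocation below is a hypothetical example for this profile, not part of the harness):

	# e.g. via: minikube ssh -p newest-cni-262540
	stat -fc %T /sys/fs/cgroup/
	# "cgroup2fs" => unified cgroup v2 hierarchy
	# "tmpfs"     => legacy cgroup v1, matching the
	#                "kubelet is configured to not run on a host using cgroup v1" error above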
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-262540 -n newest-cni-262540
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-262540 -n newest-cni-262540: exit status 2 (347.164197ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "newest-cni-262540" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-262540
helpers_test.go:243: (dbg) docker inspect newest-cni-262540:

-- stdout --
	[
	    {
	        "Id": "ed3de5d59c962edb9b6bef3201cdaec7fe174aa520c616b7fefbc3014b60f5d7",
	        "Created": "2025-12-09T05:40:46.656747886Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1437242,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-09T05:50:48.635687357Z",
	            "FinishedAt": "2025-12-09T05:50:47.310180166Z"
	        },
	        "Image": "sha256:e4eb91ed18a24161fce60c7cdd660144ecd5b8c5029dc2dea2c5e423c2f48ce4",
	        "ResolvConfPath": "/var/lib/docker/containers/ed3de5d59c962edb9b6bef3201cdaec7fe174aa520c616b7fefbc3014b60f5d7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ed3de5d59c962edb9b6bef3201cdaec7fe174aa520c616b7fefbc3014b60f5d7/hostname",
	        "HostsPath": "/var/lib/docker/containers/ed3de5d59c962edb9b6bef3201cdaec7fe174aa520c616b7fefbc3014b60f5d7/hosts",
	        "LogPath": "/var/lib/docker/containers/ed3de5d59c962edb9b6bef3201cdaec7fe174aa520c616b7fefbc3014b60f5d7/ed3de5d59c962edb9b6bef3201cdaec7fe174aa520c616b7fefbc3014b60f5d7-json.log",
	        "Name": "/newest-cni-262540",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-262540:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-262540",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ed3de5d59c962edb9b6bef3201cdaec7fe174aa520c616b7fefbc3014b60f5d7",
	                "LowerDir": "/var/lib/docker/overlay2/6415c7c451ba9f1315edabee60b733d482ef79cadbd27911dea3809654b6c1ec-init/diff:/var/lib/docker/overlay2/c44bb57aa59cc265266f37f2bb6e7ec0e7d641c3b4aeaa57e6d23deec6f0d1d4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6415c7c451ba9f1315edabee60b733d482ef79cadbd27911dea3809654b6c1ec/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6415c7c451ba9f1315edabee60b733d482ef79cadbd27911dea3809654b6c1ec/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6415c7c451ba9f1315edabee60b733d482ef79cadbd27911dea3809654b6c1ec/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-262540",
	                "Source": "/var/lib/docker/volumes/newest-cni-262540/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-262540",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-262540",
	                "name.minikube.sigs.k8s.io": "newest-cni-262540",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5ef6b7780104cfde91a86dd0f42d780a7d42fd9d965a232761225f3bafa31a2e",
	            "SandboxKey": "/var/run/docker/netns/5ef6b7780104",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34215"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34216"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34219"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34217"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34218"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-262540": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "92:d2:57:f6:4e:32",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "aa89e26051ba524ceb1352e47e7602df84b3dfd74bbc435c72069a1036fceebf",
	                    "EndpointID": "79808c0b2bead60a0d6333b887aa13d7b302f422db688969b287245b73727791",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-262540",
	                        "ed3de5d59c96"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
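The inspect output above shows the container's 8443/tcp (apiserver) port published on 127.0.0.1:34218. As a quick manual cross-check from the host, one could probe that endpoint directly; this is a sketch outside the test harness, assuming curl is installed:

	# Expected to fail while no kube-apiserver process exists in the container:
	curl -k --connect-timeout 5 https://127.0.0.1:34218/healthz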
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-262540 -n newest-cni-262540
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-262540 -n newest-cni-262540: exit status 2 (316.83588ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-262540 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-262540 logs -n 25: (1.551132815s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                            ARGS                                                                                                                            │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ unpause │ -p embed-certs-432108 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-432108           │ jenkins │ v1.37.0 │ 09 Dec 25 05:37 UTC │ 09 Dec 25 05:37 UTC │
	│ delete  │ -p embed-certs-432108                                                                                                                                                                                                                                      │ embed-certs-432108           │ jenkins │ v1.37.0 │ 09 Dec 25 05:37 UTC │ 09 Dec 25 05:37 UTC │
	│ delete  │ -p embed-certs-432108                                                                                                                                                                                                                                      │ embed-certs-432108           │ jenkins │ v1.37.0 │ 09 Dec 25 05:37 UTC │ 09 Dec 25 05:37 UTC │
	│ start   │ -p default-k8s-diff-port-564611 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-564611 │ jenkins │ v1.37.0 │ 09 Dec 25 05:37 UTC │ 09 Dec 25 05:39 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-564611 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ default-k8s-diff-port-564611 │ jenkins │ v1.37.0 │ 09 Dec 25 05:39 UTC │ 09 Dec 25 05:39 UTC │
	│ stop    │ -p default-k8s-diff-port-564611 --alsologtostderr -v=3                                                                                                                                                                                                     │ default-k8s-diff-port-564611 │ jenkins │ v1.37.0 │ 09 Dec 25 05:39 UTC │ 09 Dec 25 05:39 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-564611 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ default-k8s-diff-port-564611 │ jenkins │ v1.37.0 │ 09 Dec 25 05:39 UTC │ 09 Dec 25 05:39 UTC │
	│ start   │ -p default-k8s-diff-port-564611 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-564611 │ jenkins │ v1.37.0 │ 09 Dec 25 05:39 UTC │ 09 Dec 25 05:40 UTC │
	│ image   │ default-k8s-diff-port-564611 image list --format=json                                                                                                                                                                                                      │ default-k8s-diff-port-564611 │ jenkins │ v1.37.0 │ 09 Dec 25 05:40 UTC │ 09 Dec 25 05:40 UTC │
	│ pause   │ -p default-k8s-diff-port-564611 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-564611 │ jenkins │ v1.37.0 │ 09 Dec 25 05:40 UTC │ 09 Dec 25 05:40 UTC │
	│ unpause │ -p default-k8s-diff-port-564611 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-564611 │ jenkins │ v1.37.0 │ 09 Dec 25 05:40 UTC │ 09 Dec 25 05:40 UTC │
	│ delete  │ -p default-k8s-diff-port-564611                                                                                                                                                                                                                            │ default-k8s-diff-port-564611 │ jenkins │ v1.37.0 │ 09 Dec 25 05:40 UTC │ 09 Dec 25 05:40 UTC │
	│ delete  │ -p default-k8s-diff-port-564611                                                                                                                                                                                                                            │ default-k8s-diff-port-564611 │ jenkins │ v1.37.0 │ 09 Dec 25 05:40 UTC │ 09 Dec 25 05:40 UTC │
	│ start   │ -p newest-cni-262540 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-262540            │ jenkins │ v1.37.0 │ 09 Dec 25 05:40 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-842269 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ no-preload-842269            │ jenkins │ v1.37.0 │ 09 Dec 25 05:43 UTC │                     │
	│ stop    │ -p no-preload-842269 --alsologtostderr -v=3                                                                                                                                                                                                                │ no-preload-842269            │ jenkins │ v1.37.0 │ 09 Dec 25 05:45 UTC │ 09 Dec 25 05:45 UTC │
	│ addons  │ enable dashboard -p no-preload-842269 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                               │ no-preload-842269            │ jenkins │ v1.37.0 │ 09 Dec 25 05:45 UTC │ 09 Dec 25 05:45 UTC │
	│ start   │ -p no-preload-842269 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-842269            │ jenkins │ v1.37.0 │ 09 Dec 25 05:45 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-262540 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ newest-cni-262540            │ jenkins │ v1.37.0 │ 09 Dec 25 05:49 UTC │                     │
	│ stop    │ -p newest-cni-262540 --alsologtostderr -v=3                                                                                                                                                                                                                │ newest-cni-262540            │ jenkins │ v1.37.0 │ 09 Dec 25 05:50 UTC │ 09 Dec 25 05:50 UTC │
	│ addons  │ enable dashboard -p newest-cni-262540 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                               │ newest-cni-262540            │ jenkins │ v1.37.0 │ 09 Dec 25 05:50 UTC │ 09 Dec 25 05:50 UTC │
	│ start   │ -p newest-cni-262540 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-262540            │ jenkins │ v1.37.0 │ 09 Dec 25 05:50 UTC │                     │
	│ image   │ newest-cni-262540 image list --format=json                                                                                                                                                                                                                 │ newest-cni-262540            │ jenkins │ v1.37.0 │ 09 Dec 25 05:57 UTC │ 09 Dec 25 05:57 UTC │
	│ pause   │ -p newest-cni-262540 --alsologtostderr -v=1                                                                                                                                                                                                                │ newest-cni-262540            │ jenkins │ v1.37.0 │ 09 Dec 25 05:57 UTC │ 09 Dec 25 05:57 UTC │
	│ unpause │ -p newest-cni-262540 --alsologtostderr -v=1                                                                                                                                                                                                                │ newest-cni-262540            │ jenkins │ v1.37.0 │ 09 Dec 25 05:57 UTC │ 09 Dec 25 05:57 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/09 05:50:48
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 05:50:48.368732 1437114 out.go:360] Setting OutFile to fd 1 ...
	I1209 05:50:48.368913 1437114 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 05:50:48.368940 1437114 out.go:374] Setting ErrFile to fd 2...
	I1209 05:50:48.368958 1437114 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 05:50:48.369216 1437114 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-1142328/.minikube/bin
	I1209 05:50:48.369601 1437114 out.go:368] Setting JSON to false
	I1209 05:50:48.370536 1437114 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":30772,"bootTime":1765228677,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1209 05:50:48.370622 1437114 start.go:143] virtualization:  
	I1209 05:50:48.373806 1437114 out.go:179] * [newest-cni-262540] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1209 05:50:48.377517 1437114 out.go:179]   - MINIKUBE_LOCATION=22081
	I1209 05:50:48.377579 1437114 notify.go:221] Checking for updates...
	I1209 05:50:48.383314 1437114 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 05:50:48.386284 1437114 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22081-1142328/kubeconfig
	I1209 05:50:48.389132 1437114 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-1142328/.minikube
	I1209 05:50:48.392076 1437114 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1209 05:50:48.394975 1437114 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 05:50:48.398361 1437114 config.go:182] Loaded profile config "newest-cni-262540": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1209 05:50:48.398977 1437114 driver.go:422] Setting default libvirt URI to qemu:///system
	I1209 05:50:48.429565 1437114 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1209 05:50:48.429674 1437114 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 05:50:48.493190 1437114 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-09 05:50:48.483865172 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1209 05:50:48.493298 1437114 docker.go:319] overlay module found
	I1209 05:50:48.496461 1437114 out.go:179] * Using the docker driver based on existing profile
	I1209 05:50:48.499256 1437114 start.go:309] selected driver: docker
	I1209 05:50:48.499276 1437114 start.go:927] validating driver "docker" against &{Name:newest-cni-262540 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-262540 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 05:50:48.499393 1437114 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 05:50:48.500188 1437114 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 05:50:48.552839 1437114 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-09 05:50:48.544121972 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1209 05:50:48.553181 1437114 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1209 05:50:48.553214 1437114 cni.go:84] Creating CNI manager for ""
	I1209 05:50:48.553271 1437114 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1209 05:50:48.553312 1437114 start.go:353] cluster config:
	{Name:newest-cni-262540 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-262540 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 05:50:48.558270 1437114 out.go:179] * Starting "newest-cni-262540" primary control-plane node in "newest-cni-262540" cluster
	I1209 05:50:48.560987 1437114 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1209 05:50:48.563913 1437114 out.go:179] * Pulling base image v0.0.48-1765184860-22066 ...
	I1209 05:50:48.566628 1437114 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1209 05:50:48.566677 1437114 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1209 05:50:48.566701 1437114 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local docker daemon
	I1209 05:50:48.566709 1437114 cache.go:65] Caching tarball of preloaded images
	I1209 05:50:48.566793 1437114 preload.go:238] Found /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1209 05:50:48.566803 1437114 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1209 05:50:48.566914 1437114 profile.go:143] Saving config to /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/config.json ...
	I1209 05:50:48.585366 1437114 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local docker daemon, skipping pull
	I1209 05:50:48.585390 1437114 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c exists in daemon, skipping load
	I1209 05:50:48.585410 1437114 cache.go:243] Successfully downloaded all kic artifacts
	I1209 05:50:48.585447 1437114 start.go:360] acquireMachinesLock for newest-cni-262540: {Name:mk272d84ff1bc8c8949f2f0b1f608a7519899d10 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 05:50:48.585504 1437114 start.go:364] duration metric: took 35.806µs to acquireMachinesLock for "newest-cni-262540"
	I1209 05:50:48.585529 1437114 start.go:96] Skipping create...Using existing machine configuration
	I1209 05:50:48.585539 1437114 fix.go:54] fixHost starting: 
	I1209 05:50:48.585799 1437114 cli_runner.go:164] Run: docker container inspect newest-cni-262540 --format={{.State.Status}}
	I1209 05:50:48.601614 1437114 fix.go:112] recreateIfNeeded on newest-cni-262540: state=Stopped err=<nil>
	W1209 05:50:48.601645 1437114 fix.go:138] unexpected machine state, will restart: <nil>
	W1209 05:50:45.187180 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:50:47.684513 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	I1209 05:50:48.604910 1437114 out.go:252] * Restarting existing docker container for "newest-cni-262540" ...
	I1209 05:50:48.604997 1437114 cli_runner.go:164] Run: docker start newest-cni-262540
	I1209 05:50:48.871934 1437114 cli_runner.go:164] Run: docker container inspect newest-cni-262540 --format={{.State.Status}}
	I1209 05:50:48.896820 1437114 kic.go:430] container "newest-cni-262540" state is running.
	I1209 05:50:48.898586 1437114 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-262540
	I1209 05:50:48.919622 1437114 profile.go:143] Saving config to /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/config.json ...
	I1209 05:50:48.919952 1437114 machine.go:94] provisionDockerMachine start ...
	I1209 05:50:48.920090 1437114 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-262540
	I1209 05:50:48.944382 1437114 main.go:143] libmachine: Using SSH client type: native
	I1209 05:50:48.944721 1437114 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 34215 <nil> <nil>}
	I1209 05:50:48.944730 1437114 main.go:143] libmachine: About to run SSH command:
	hostname
	I1209 05:50:48.945423 1437114 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:54144->127.0.0.1:34215: read: connection reset by peer
	I1209 05:50:52.103931 1437114 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-262540
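
The handshake failure at 05:50:48.945423 followed by the clean result above is a dial-and-retry pattern: the container was only just restarted, so the first SSH connection is reset while sshd comes up, and the runner keeps retrying until a handshake succeeds. A minimal Go sketch of that pattern, assuming golang.org/x/crypto/ssh; the address, user, and key path below are placeholders, not values computed the way minikube computes them:

package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

// dialWithRetry keeps attempting the TCP/SSH handshake until it succeeds
// or the deadline passes, mirroring the failed-then-succeeded pair above.
func dialWithRetry(addr string, cfg *ssh.ClientConfig, deadline time.Time) (*ssh.Client, error) {
	for {
		client, err := ssh.Dial("tcp", addr, cfg)
		if err == nil {
			return client, nil
		}
		if time.Now().After(deadline) {
			return nil, fmt.Errorf("sshd never became reachable: %w", err)
		}
		time.Sleep(500 * time.Millisecond) // brief backoff before the next attempt
	}
}

func main() {
	key, err := os.ReadFile("id_rsa") // placeholder key path
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // tolerable for a local container, not for remote hosts
		Timeout:         5 * time.Second,
	}
	client, err := dialWithRetry("127.0.0.1:34215", cfg, time.Now().Add(time.Minute))
	if err != nil {
		panic(err)
	}
	defer client.Close()
}
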
	
	I1209 05:50:52.103958 1437114 ubuntu.go:182] provisioning hostname "newest-cni-262540"
	I1209 05:50:52.104072 1437114 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-262540
	I1209 05:50:52.121462 1437114 main.go:143] libmachine: Using SSH client type: native
	I1209 05:50:52.121778 1437114 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 34215 <nil> <nil>}
	I1209 05:50:52.121795 1437114 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-262540 && echo "newest-cni-262540" | sudo tee /etc/hostname
	I1209 05:50:52.280621 1437114 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-262540
	
	I1209 05:50:52.280705 1437114 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-262540
	I1209 05:50:52.301681 1437114 main.go:143] libmachine: Using SSH client type: native
	I1209 05:50:52.301997 1437114 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 34215 <nil> <nil>}
	I1209 05:50:52.302019 1437114 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-262540' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-262540/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-262540' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 05:50:52.452274 1437114 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1209 05:50:52.452304 1437114 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22081-1142328/.minikube CaCertPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22081-1142328/.minikube}
	I1209 05:50:52.452324 1437114 ubuntu.go:190] setting up certificates
	I1209 05:50:52.452332 1437114 provision.go:84] configureAuth start
	I1209 05:50:52.452391 1437114 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-262540
	I1209 05:50:52.475825 1437114 provision.go:143] copyHostCerts
	I1209 05:50:52.475907 1437114 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.pem, removing ...
	I1209 05:50:52.475921 1437114 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.pem
	I1209 05:50:52.475999 1437114 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.pem (1078 bytes)
	I1209 05:50:52.476136 1437114 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-1142328/.minikube/cert.pem, removing ...
	I1209 05:50:52.476147 1437114 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-1142328/.minikube/cert.pem
	I1209 05:50:52.476175 1437114 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22081-1142328/.minikube/cert.pem (1123 bytes)
	I1209 05:50:52.476288 1437114 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-1142328/.minikube/key.pem, removing ...
	I1209 05:50:52.476322 1437114 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-1142328/.minikube/key.pem
	I1209 05:50:52.476364 1437114 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22081-1142328/.minikube/key.pem (1675 bytes)
	I1209 05:50:52.476440 1437114 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca-key.pem org=jenkins.newest-cni-262540 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-262540]
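
The san=[...] list above becomes the subjectAltName set of the machine's server certificate, signed by the shared CA so that both the 127.0.0.1 port-forwarded path and the in-network 192.168.76.2 path verify. A sketch of that signing step with crypto/x509; the file names, PKCS#1 RSA key format, and serial number are assumptions, while the SANs, org, and 26280h lifetime come from this log:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	caPEM, err := os.ReadFile("ca.pem") // placeholder paths for the CA pair
	if err != nil {
		panic(err)
	}
	caKeyPEM, err := os.ReadFile("ca-key.pem")
	if err != nil {
		panic(err)
	}
	caBlock, _ := pem.Decode(caPEM)
	ca, err := x509.ParseCertificate(caBlock.Bytes)
	if err != nil {
		panic(err)
	}
	keyBlock, _ := pem.Decode(caKeyPEM)
	caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes) // assumes an RSA PKCS#1 key
	if err != nil {
		panic(err)
	}
	serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-262540"}},
		// The SANs from the log line above, split by type.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
		DNSNames:    []string{"localhost", "minikube", "newest-cni-262540"},
		NotBefore:   time.Now(),
		NotAfter:    time.Now().Add(26280 * time.Hour), // the CertExpiration in the cluster config
		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &serverKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
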
	I1209 05:50:52.561012 1437114 provision.go:177] copyRemoteCerts
	I1209 05:50:52.561084 1437114 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 05:50:52.561133 1437114 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-262540
	I1209 05:50:52.578674 1437114 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34215 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/newest-cni-262540/id_rsa Username:docker}
	I1209 05:50:52.685758 1437114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1209 05:50:52.702408 1437114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1209 05:50:52.719173 1437114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1671 bytes)
	I1209 05:50:52.736435 1437114 provision.go:87] duration metric: took 284.081054ms to configureAuth
	I1209 05:50:52.736462 1437114 ubuntu.go:206] setting minikube options for container-runtime
	I1209 05:50:52.736672 1437114 config.go:182] Loaded profile config "newest-cni-262540": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1209 05:50:52.736698 1437114 machine.go:97] duration metric: took 3.816733312s to provisionDockerMachine
	I1209 05:50:52.736707 1437114 start.go:293] postStartSetup for "newest-cni-262540" (driver="docker")
	I1209 05:50:52.736719 1437114 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 05:50:52.736771 1437114 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 05:50:52.736819 1437114 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-262540
	I1209 05:50:52.753733 1437114 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34215 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/newest-cni-262540/id_rsa Username:docker}
	I1209 05:50:52.859644 1437114 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 05:50:52.862806 1437114 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1209 05:50:52.862830 1437114 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1209 05:50:52.862841 1437114 filesync.go:126] Scanning /home/jenkins/minikube-integration/22081-1142328/.minikube/addons for local assets ...
	I1209 05:50:52.862893 1437114 filesync.go:126] Scanning /home/jenkins/minikube-integration/22081-1142328/.minikube/files for local assets ...
	I1209 05:50:52.862974 1437114 filesync.go:149] local asset: /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/ssl/certs/11442312.pem -> 11442312.pem in /etc/ssl/certs
	I1209 05:50:52.863076 1437114 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 05:50:52.870063 1437114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/ssl/certs/11442312.pem --> /etc/ssl/certs/11442312.pem (1708 bytes)
	I1209 05:50:52.886852 1437114 start.go:296] duration metric: took 150.129481ms for postStartSetup
	I1209 05:50:52.886932 1437114 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1209 05:50:52.887020 1437114 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-262540
	I1209 05:50:52.904086 1437114 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34215 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/newest-cni-262540/id_rsa Username:docker}
	I1209 05:50:53.006063 1437114 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1209 05:50:53.011716 1437114 fix.go:56] duration metric: took 4.426170276s for fixHost
	I1209 05:50:53.011745 1437114 start.go:83] releasing machines lock for "newest-cni-262540", held for 4.426228294s
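
The acquire/release pair above brackets all host mutation: a named machines lock with a 500ms retry delay and a 10-minute timeout serializes concurrent minikube processes on the same host. A file-based sketch with the same Delay/Timeout shape (minikube's real lock implementation differs; this only illustrates the semantics):

package main

import (
	"errors"
	"os"
	"time"
)

// acquire takes a coarse inter-process lock by exclusively creating a file,
// polling until it succeeds or the timeout elapses.
func acquire(path string, delay, timeout time.Duration) (func(), error) {
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			f.Close()
			return func() { os.Remove(path) }, nil
		}
		if time.Now().After(deadline) {
			return nil, errors.New("timed out acquiring " + path)
		}
		time.Sleep(delay)
	}
}

func main() {
	release, err := acquire("/tmp/machines.lock", 500*time.Millisecond, 10*time.Minute)
	if err != nil {
		panic(err)
	}
	defer release()
	// ... create or fix the machine while holding the lock ...
}
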
	I1209 05:50:53.011812 1437114 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-262540
	I1209 05:50:53.028468 1437114 ssh_runner.go:195] Run: cat /version.json
	I1209 05:50:53.028532 1437114 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-262540
	I1209 05:50:53.028815 1437114 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 05:50:53.028886 1437114 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-262540
	I1209 05:50:53.050698 1437114 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34215 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/newest-cni-262540/id_rsa Username:docker}
	I1209 05:50:53.061651 1437114 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34215 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/newest-cni-262540/id_rsa Username:docker}
	I1209 05:50:53.151708 1437114 ssh_runner.go:195] Run: systemctl --version
	I1209 05:50:53.249572 1437114 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 05:50:53.254184 1437114 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 05:50:53.254256 1437114 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 05:50:53.261725 1437114 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1209 05:50:53.261749 1437114 start.go:496] detecting cgroup driver to use...
	I1209 05:50:53.261780 1437114 detect.go:187] detected "cgroupfs" cgroup driver on host os
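
An aside on the detected "cgroupfs" cgroup driver line: detect.go's exact checks are not shown in this excerpt, but two common host probes for this decision are the systemd marker directory (the sd_booted() convention) and the cgroup v2 unified-hierarchy file. A sketch of both, purely for illustration:

package main

import (
	"fmt"
	"os"
)

func main() {
	// /run/systemd/system exists only when systemd is the running init.
	if _, err := os.Stat("/run/systemd/system"); err == nil {
		fmt.Println("host init is systemd")
	}
	// cgroup.controllers exists only on the unified (v2) hierarchy.
	if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
		fmt.Println("unified cgroup v2 hierarchy")
	} else {
		fmt.Println("legacy cgroup v1 hierarchy; cgroupfs driver is the usual pairing")
	}
}
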
	I1209 05:50:53.261828 1437114 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1209 05:50:53.278531 1437114 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1209 05:50:53.291190 1437114 docker.go:218] disabling cri-docker service (if available) ...
	I1209 05:50:53.291252 1437114 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 05:50:53.306525 1437114 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 05:50:53.319477 1437114 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 05:50:53.424347 1437114 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 05:50:53.539911 1437114 docker.go:234] disabling docker service ...
	I1209 05:50:53.540005 1437114 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 05:50:53.555506 1437114 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 05:50:53.568379 1437114 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 05:50:53.684143 1437114 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 05:50:53.819865 1437114 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 05:50:53.834400 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 05:50:53.848555 1437114 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1209 05:50:53.857346 1437114 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1209 05:50:53.866232 1437114 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1209 05:50:53.866362 1437114 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1209 05:50:53.875141 1437114 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1209 05:50:53.883775 1437114 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1209 05:50:53.892743 1437114 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1209 05:50:53.901606 1437114 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 05:50:53.909694 1437114 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1209 05:50:53.918469 1437114 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1209 05:50:53.927272 1437114 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
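
The run of sed invocations above normalizes /etc/containerd/config.toml in place: pinning the sandbox (pause) image, forcing SystemdCgroup = false to match the cgroupfs driver chosen earlier, migrating v1 runtime names to io.containerd.runc.v2, fixing conf_dir, and re-inserting enable_unprivileged_ports. The same style of edit expressed in Go, using the SystemdCgroup rewrite as the example (a sketch; file handling is simplified and no backup is taken):

package main

import (
	"os"
	"regexp"
)

func main() {
	const path = "/etc/containerd/config.toml"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	// Equivalent of: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
	out := re.ReplaceAll(data, []byte(`${1}SystemdCgroup = false`))
	if err := os.WriteFile(path, out, 0o644); err != nil {
		panic(err)
	}
}
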
	I1209 05:50:53.939275 1437114 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 05:50:53.948029 1437114 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 05:50:53.956257 1437114 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 05:50:54.075166 1437114 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1209 05:50:54.195479 1437114 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1209 05:50:54.195546 1437114 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1209 05:50:54.199412 1437114 start.go:564] Will wait 60s for crictl version
	I1209 05:50:54.199478 1437114 ssh_runner.go:195] Run: which crictl
	I1209 05:50:54.203349 1437114 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1209 05:50:54.229036 1437114 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1209 05:50:54.229147 1437114 ssh_runner.go:195] Run: containerd --version
	I1209 05:50:54.257755 1437114 ssh_runner.go:195] Run: containerd --version
	I1209 05:50:54.281890 1437114 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	W1209 05:50:50.184270 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:50:52.684275 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	I1209 05:50:54.284780 1437114 cli_runner.go:164] Run: docker network inspect newest-cni-262540 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1209 05:50:54.300458 1437114 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1209 05:50:54.304227 1437114 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
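
Both /etc/hosts edits in this log (host.minikube.internal here, control-plane.minikube.internal later) follow one idempotent pattern: grep out any previous line for the name, then append the current mapping, so repeated starts never accumulate duplicate entries. A Go sketch of the same operation; the sudo-and-copy indirection from the bash one-liner is omitted:

package main

import (
	"os"
	"strings"
)

func main() {
	const entry = "192.168.76.1\thost.minikube.internal"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
	kept := lines[:0]
	for _, line := range lines {
		// Drop any stale mapping so the rewrite stays idempotent.
		if !strings.HasSuffix(line, "\thost.minikube.internal") {
			kept = append(kept, line)
		}
	}
	kept = append(kept, entry)
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		panic(err)
	}
}
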
	I1209 05:50:54.316829 1437114 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1209 05:50:54.319602 1437114 kubeadm.go:884] updating cluster {Name:newest-cni-262540 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-262540 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 05:50:54.319761 1437114 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1209 05:50:54.319850 1437114 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 05:50:54.344882 1437114 containerd.go:627] all images are preloaded for containerd runtime.
	I1209 05:50:54.344907 1437114 containerd.go:534] Images already preloaded, skipping extraction
	I1209 05:50:54.344969 1437114 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 05:50:54.368351 1437114 containerd.go:627] all images are preloaded for containerd runtime.
	I1209 05:50:54.368375 1437114 cache_images.go:86] Images are preloaded, skipping loading
	I1209 05:50:54.368384 1437114 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1209 05:50:54.368487 1437114 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-262540 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-262540 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
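
The unit rendered above relies on a standard systemd drop-in idiom: the bare ExecStart= line clears the ExecStart inherited from kubelet.service, so the following ExecStart fully replaces the command rather than appending a second one. A sketch of writing such a drop-in from Go (the kubelet flags are abridged relative to the full command line above):

package main

import (
	"os"
	"os/exec"
)

func main() {
	// Abridged drop-in: empty ExecStart= resets the inherited value first.
	dropIn := "[Service]\nExecStart=\nExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --config=/var/lib/kubelet/config.yaml --kubeconfig=/etc/kubernetes/kubelet.conf\n"
	if err := os.MkdirAll("/etc/systemd/system/kubelet.service.d", 0o755); err != nil {
		panic(err)
	}
	if err := os.WriteFile("/etc/systemd/system/kubelet.service.d/10-kubeadm.conf", []byte(dropIn), 0o644); err != nil {
		panic(err)
	}
	// systemd only notices unit-file changes after a daemon-reload, which is
	// why the log runs one before "systemctl start kubelet".
	if err := exec.Command("systemctl", "daemon-reload").Run(); err != nil {
		panic(err)
	}
}
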
	I1209 05:50:54.368554 1437114 ssh_runner.go:195] Run: sudo crictl info
	I1209 05:50:54.396480 1437114 cni.go:84] Creating CNI manager for ""
	I1209 05:50:54.396505 1437114 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1209 05:50:54.396527 1437114 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1209 05:50:54.396551 1437114 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-262540 NodeName:newest-cni-262540 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1209 05:50:54.396668 1437114 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-262540"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
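
A rendered config like the three documents above is normally consumed through kubeadm's --config flag; a minimal sketch of that hand-off follows. On this restart path it is only staged as kubeadm.yaml.new (the scp below) and later diffed against the previous file rather than fed to a fresh kubeadm init, and minikube's real invocation carries additional flags not shown in this excerpt:

package main

import (
	"os"
	"os/exec"
)

func main() {
	// Hand the generated multi-document config to kubeadm in one shot.
	cmd := exec.Command("kubeadm", "init", "--config", "/var/tmp/minikube/kubeadm.yaml")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}
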
	
	I1209 05:50:54.396755 1437114 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1209 05:50:54.404357 1437114 binaries.go:51] Found k8s binaries, skipping transfer
	I1209 05:50:54.404462 1437114 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1209 05:50:54.411829 1437114 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1209 05:50:54.423915 1437114 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1209 05:50:54.436484 1437114 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
	I1209 05:50:54.448905 1437114 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1209 05:50:54.452398 1437114 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 05:50:54.461840 1437114 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 05:50:54.574379 1437114 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 05:50:54.590263 1437114 certs.go:69] Setting up /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540 for IP: 192.168.76.2
	I1209 05:50:54.590332 1437114 certs.go:195] generating shared ca certs ...
	I1209 05:50:54.590364 1437114 certs.go:227] acquiring lock for ca certs: {Name:mk15788702f8c4e23b5aeab3f44961d296fab259 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 05:50:54.590561 1437114 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.key
	I1209 05:50:54.590652 1437114 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/proxy-client-ca.key
	I1209 05:50:54.590688 1437114 certs.go:257] generating profile certs ...
	I1209 05:50:54.590838 1437114 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/client.key
	I1209 05:50:54.590942 1437114 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/apiserver.key.0ed49b31
	I1209 05:50:54.591051 1437114 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/proxy-client.key
	I1209 05:50:54.591210 1437114 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/1144231.pem (1338 bytes)
	W1209 05:50:54.591287 1437114 certs.go:480] ignoring /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/1144231_empty.pem, impossibly tiny 0 bytes
	I1209 05:50:54.591314 1437114 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca-key.pem (1679 bytes)
	I1209 05:50:54.591380 1437114 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem (1078 bytes)
	I1209 05:50:54.591442 1437114 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/cert.pem (1123 bytes)
	I1209 05:50:54.591490 1437114 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/key.pem (1675 bytes)
	I1209 05:50:54.591576 1437114 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/ssl/certs/11442312.pem (1708 bytes)
	I1209 05:50:54.592436 1437114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 05:50:54.617399 1437114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1209 05:50:54.636943 1437114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 05:50:54.658494 1437114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1209 05:50:54.674958 1437114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1209 05:50:54.701134 1437114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1209 05:50:54.720347 1437114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 05:50:54.738904 1437114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/newest-cni-262540/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1209 05:50:54.758253 1437114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/1144231.pem --> /usr/share/ca-certificates/1144231.pem (1338 bytes)
	I1209 05:50:54.775204 1437114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/ssl/certs/11442312.pem --> /usr/share/ca-certificates/11442312.pem (1708 bytes)
	I1209 05:50:54.791963 1437114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 05:50:54.809403 1437114 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 05:50:54.821958 1437114 ssh_runner.go:195] Run: openssl version
	I1209 05:50:54.828113 1437114 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1209 05:50:54.835305 1437114 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1209 05:50:54.842458 1437114 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 05:50:54.846155 1437114 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 04:09 /usr/share/ca-certificates/minikubeCA.pem
	I1209 05:50:54.846222 1437114 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 05:50:54.887330 1437114 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1209 05:50:54.894630 1437114 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1144231.pem
	I1209 05:50:54.901722 1437114 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1144231.pem /etc/ssl/certs/1144231.pem
	I1209 05:50:54.909025 1437114 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1144231.pem
	I1209 05:50:54.912514 1437114 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 04:18 /usr/share/ca-certificates/1144231.pem
	I1209 05:50:54.912621 1437114 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1144231.pem
	I1209 05:50:54.953649 1437114 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1209 05:50:54.960781 1437114 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/11442312.pem
	I1209 05:50:54.967822 1437114 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/11442312.pem /etc/ssl/certs/11442312.pem
	I1209 05:50:54.975177 1437114 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11442312.pem
	I1209 05:50:54.978699 1437114 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 04:18 /usr/share/ca-certificates/11442312.pem
	I1209 05:50:54.978782 1437114 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11442312.pem
	I1209 05:50:55.020640 1437114 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
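
Each hash-then-test pair above wires a CA into OpenSSL's trust store: openssl x509 -hash prints the certificate's subject hash (b5213941, 51391683, and 3ec20f2e here), and a <hash>.0 symlink under /etc/ssl/certs is how OpenSSL locates an issuer by that hash at verification time. A sketch of the two steps, shelling out to openssl exactly as the log does:

package main

import (
	"os"
	"os/exec"
	"strings"
)

func main() {
	certPath := "/usr/share/ca-certificates/minikubeCA.pem"
	// Same computation the log performs with "openssl x509 -hash -noout".
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := "/etc/ssl/certs/" + hash + ".0"
	_ = os.Remove(link) // replace a stale link if one exists
	if err := os.Symlink(certPath, link); err != nil {
		panic(err)
	}
}
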
	I1209 05:50:55.034989 1437114 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 05:50:55.043885 1437114 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1209 05:50:55.090059 1437114 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1209 05:50:55.134954 1437114 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1209 05:50:55.180095 1437114 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1209 05:50:55.223090 1437114 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1209 05:50:55.265103 1437114 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
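
The batch of -checkend 86400 probes above asks openssl whether each certificate expires within the next 24 hours; a failing probe is what would force regeneration instead of the "skipping valid ... cert" path taken earlier. The same check with Go's standard library, against one of the files probed above:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM data found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// Equivalent of: openssl x509 -noout -checkend 86400
	if time.Until(cert.NotAfter) < 24*time.Hour {
		fmt.Println("certificate expires within 86400s; regeneration needed")
		os.Exit(1)
	}
	fmt.Println("certificate valid for at least another 24h")
}
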
	I1209 05:50:55.306238 1437114 kubeadm.go:401] StartCluster: {Name:newest-cni-262540 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-262540 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 05:50:55.306348 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1209 05:50:55.306413 1437114 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 05:50:55.335032 1437114 cri.go:89] found id: ""
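
The `found id: ""` result means the label-filtered listing returned no kube-system containers yet, which is what steers the code toward a cluster restart rather than reusing running components. The same query can be reproduced on the node verbatim (command copied from the Run: line above):

    # Prints only container IDs; empty output = no kube-system containers running.
    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
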
	I1209 05:50:55.335115 1437114 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1209 05:50:55.355619 1437114 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1209 05:50:55.355640 1437114 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1209 05:50:55.355691 1437114 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1209 05:50:55.363844 1437114 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1209 05:50:55.364433 1437114 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-262540" does not appear in /home/jenkins/minikube-integration/22081-1142328/kubeconfig
	I1209 05:50:55.364754 1437114 kubeconfig.go:62] /home/jenkins/minikube-integration/22081-1142328/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-262540" cluster setting kubeconfig missing "newest-cni-262540" context setting]
	I1209 05:50:55.365251 1437114 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1142328/kubeconfig: {Name:mk0dc127429f88dc4fdfb2d110deebc58207c1b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
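
kubeconfig.go first verifies that the profile has both a cluster and a context entry in the shared kubeconfig, then rewrites the file under a WriteFile lock when either is missing. A hypothetical manual check that mirrors that verify step (the kubectl invocation is an assumption, not what minikube itself runs):

    KC=/home/jenkins/minikube-integration/22081-1142328/kubeconfig
    if ! kubectl --kubeconfig "$KC" config get-contexts newest-cni-262540 >/dev/null 2>&1; then
      echo "context missing; minikube re-adds the cluster and context entries"
    fi
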
	I1209 05:50:55.366765 1437114 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1209 05:50:55.375221 1437114 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1209 05:50:55.375252 1437114 kubeadm.go:602] duration metric: took 19.605753ms to restartPrimaryControlPlane
	I1209 05:50:55.375261 1437114 kubeadm.go:403] duration metric: took 69.033781ms to StartCluster
	I1209 05:50:55.375276 1437114 settings.go:142] acquiring lock: {Name:mk8fa744e3d74bf8a1cbf5ac275c9f1969ad91a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 05:50:55.375345 1437114 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22081-1142328/kubeconfig
	I1209 05:50:55.376265 1437114 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1142328/kubeconfig: {Name:mk0dc127429f88dc4fdfb2d110deebc58207c1b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 05:50:55.376705 1437114 config.go:182] Loaded profile config "newest-cni-262540": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1209 05:50:55.376504 1437114 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1209 05:50:55.376810 1437114 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1209 05:50:55.377093 1437114 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-262540"
	I1209 05:50:55.377111 1437114 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-262540"
	I1209 05:50:55.377136 1437114 host.go:66] Checking if "newest-cni-262540" exists ...
	I1209 05:50:55.377594 1437114 cli_runner.go:164] Run: docker container inspect newest-cni-262540 --format={{.State.Status}}
	I1209 05:50:55.377785 1437114 addons.go:70] Setting dashboard=true in profile "newest-cni-262540"
	I1209 05:50:55.377813 1437114 addons.go:239] Setting addon dashboard=true in "newest-cni-262540"
	W1209 05:50:55.377825 1437114 addons.go:248] addon dashboard should already be in state true
	I1209 05:50:55.377849 1437114 host.go:66] Checking if "newest-cni-262540" exists ...
	I1209 05:50:55.378304 1437114 cli_runner.go:164] Run: docker container inspect newest-cni-262540 --format={{.State.Status}}
	I1209 05:50:55.378820 1437114 addons.go:70] Setting default-storageclass=true in profile "newest-cni-262540"
	I1209 05:50:55.378864 1437114 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-262540"
	I1209 05:50:55.379212 1437114 cli_runner.go:164] Run: docker container inspect newest-cni-262540 --format={{.State.Status}}
	I1209 05:50:55.381896 1437114 out.go:179] * Verifying Kubernetes components...
	I1209 05:50:55.388614 1437114 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 05:50:55.438264 1437114 addons.go:239] Setting addon default-storageclass=true in "newest-cni-262540"
	I1209 05:50:55.438303 1437114 host.go:66] Checking if "newest-cni-262540" exists ...
	I1209 05:50:55.438728 1437114 cli_runner.go:164] Run: docker container inspect newest-cni-262540 --format={{.State.Status}}
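
Each addon goroutine gates on the container's state with a Go template before doing any work, which is why `docker container inspect --format={{.State.Status}}` appears once per addon above. The probe on its own (format string as logged):

    # Prints e.g. "running" or "exited".
    docker container inspect newest-cni-262540 --format '{{.State.Status}}'
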
	I1209 05:50:55.440785 1437114 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 05:50:55.442715 1437114 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 05:50:55.442743 1437114 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1209 05:50:55.442806 1437114 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-262540
	I1209 05:50:55.442947 1437114 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1209 05:50:55.445621 1437114 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1209 05:50:55.449877 1437114 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1209 05:50:55.449904 1437114 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1209 05:50:55.449976 1437114 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-262540
	I1209 05:50:55.481759 1437114 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34215 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/newest-cni-262540/id_rsa Username:docker}
	I1209 05:50:55.496417 1437114 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1209 05:50:55.496440 1437114 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1209 05:50:55.496499 1437114 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-262540
	I1209 05:50:55.515362 1437114 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34215 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/newest-cni-262540/id_rsa Username:docker}
	I1209 05:50:55.537402 1437114 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34215 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/newest-cni-262540/id_rsa Username:docker}
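
The nested-index template digs the published host port for container port 22/tcp out of NetworkSettings; the sshutil lines then dial 127.0.0.1 on that port (34215 here) as user docker with the profile's key. A sketch of the same lookup plus connection (the ssh invocation is illustrative; template, key path, and username are taken from the log):

    PORT=$(docker container inspect -f \
      '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' newest-cni-262540)
    ssh -p "$PORT" \
      -i /home/jenkins/minikube-integration/22081-1142328/.minikube/machines/newest-cni-262540/id_rsa \
      docker@127.0.0.1
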
	I1209 05:50:55.642792 1437114 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 05:50:55.677774 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 05:50:55.711653 1437114 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1209 05:50:55.711691 1437114 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1209 05:50:55.713691 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1209 05:50:55.771340 1437114 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1209 05:50:55.771368 1437114 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1209 05:50:55.785331 1437114 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1209 05:50:55.785403 1437114 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1209 05:50:55.798961 1437114 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1209 05:50:55.798984 1437114 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1209 05:50:55.811558 1437114 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1209 05:50:55.811625 1437114 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1209 05:50:55.824010 1437114 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1209 05:50:55.824113 1437114 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1209 05:50:55.836722 1437114 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1209 05:50:55.836745 1437114 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1209 05:50:55.849061 1437114 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1209 05:50:55.849126 1437114 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1209 05:50:55.862091 1437114 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1209 05:50:55.862114 1437114 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1209 05:50:55.875010 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1209 05:50:56.435552 1437114 api_server.go:52] waiting for apiserver process to appear ...
	W1209 05:50:56.435748 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:50:56.435801 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:50:56.435838 1437114 retry.go:31] will retry after 228.095144ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 05:50:56.435700 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:50:56.435898 1437114 retry.go:31] will retry after 361.053359ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 05:50:56.436142 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:50:56.436189 1437114 retry.go:31] will retry after 212.683869ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:50:56.649580 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1209 05:50:56.665010 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1209 05:50:56.729564 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:50:56.729662 1437114 retry.go:31] will retry after 263.201205ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 05:50:56.751560 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:50:56.751590 1437114 retry.go:31] will retry after 282.08987ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:50:56.797828 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1209 05:50:56.855489 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:50:56.855525 1437114 retry.go:31] will retry after 519.882573ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:50:56.936655 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
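
In parallel with the addon retries, api_server.go polls about once per second for the apiserver process itself; `pgrep -x -n -f` matches the pattern against the full command line and reports the newest matching PID. The same wait expressed as a loop (loop form is a sketch; pattern and flags are from the Run: lines):

    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      sleep 1
    done
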
	I1209 05:50:56.993111 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1209 05:50:57.034512 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1209 05:50:57.059780 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:50:57.059861 1437114 retry.go:31] will retry after 724.517068ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 05:50:57.095702 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:50:57.095733 1437114 retry.go:31] will retry after 773.591416ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:50:57.376312 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1209 05:50:57.435557 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:50:57.435589 1437114 retry.go:31] will retry after 453.196958ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:50:57.436773 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:50:57.784620 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1209 05:50:57.844755 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:50:57.844791 1437114 retry.go:31] will retry after 1.262011023s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:50:57.869923 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 05:50:57.889536 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 05:50:57.936212 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1209 05:50:57.961431 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:50:57.961468 1437114 retry.go:31] will retry after 546.501311ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 05:50:58.032466 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:50:58.032501 1437114 retry.go:31] will retry after 1.229436669s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 05:50:54.684397 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:50:57.184110 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:50:59.184561 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
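
These three interleaved W lines come from a different process (PID 1429857, the no-preload-842269 profile), whose node_ready.go poll is hitting its own unreachable apiserver at 192.168.85.2:8443. A hypothetical equivalent of that Ready-condition probe (the kubectl context name is an assumption based on minikube's profile-named contexts):

    kubectl --context no-preload-842269 get node no-preload-842269 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
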
	I1209 05:50:58.436310 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:50:58.508935 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1209 05:50:58.565163 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:50:58.565196 1437114 retry.go:31] will retry after 1.407912766s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:50:58.936676 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:50:59.107417 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1209 05:50:59.166291 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:50:59.166364 1437114 retry.go:31] will retry after 928.374807ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:50:59.262572 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1209 05:50:59.321942 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:50:59.321975 1437114 retry.go:31] will retry after 837.961471ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:50:59.436172 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:50:59.936839 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:50:59.973278 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 05:51:00.094961 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1209 05:51:00.122388 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:00.122508 1437114 retry.go:31] will retry after 2.37581771s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:00.163516 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1209 05:51:00.369038 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:00.369122 1437114 retry.go:31] will retry after 1.02409357s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 05:51:00.430845 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:00.430881 1437114 retry.go:31] will retry after 1.008529781s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:00.435975 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:00.935928 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
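
[editor's note] Interleaved with the failing applies, minikube polls roughly every 500ms for a running kube-apiserver process (the repeated "pgrep -xnf kube-apiserver.*minikube.*" lines); the addon applies can only succeed once that poll finds the process and port 8443 starts accepting connections. A hedged sketch of an equivalent readiness wait in Go follows, probing the TCP port directly rather than shelling out to pgrep; the address, interval, and deadline are assumptions for illustration only.

package main

import (
	"fmt"
	"net"
	"time"
)

// waitForAPIServer polls addr until a TCP connect succeeds or the deadline
// passes, approximating the 500ms polling loop visible in this log.
func waitForAPIServer(addr string, interval, deadline time.Duration) error {
	stop := time.Now().Add(deadline)
	for time.Now().Before(stop) {
		conn, err := net.DialTimeout("tcp", addr, interval)
		if err == nil {
			conn.Close()
			return nil
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("apiserver at %s not reachable within %s", addr, deadline)
}

func main() {
	// localhost:8443 is the address the failing applies dial in this log.
	fmt.Println(waitForAPIServer("localhost:8443", 500*time.Millisecond, 30*time.Second))
}
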
	I1209 05:51:01.393811 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1209 05:51:01.436520 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:01.440060 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1209 05:51:01.479948 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:01.480008 1437114 retry.go:31] will retry after 3.887040249s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 05:51:01.521362 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:01.521394 1437114 retry.go:31] will retry after 2.488257731s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:01.936891 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:02.436059 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:02.499505 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1209 05:51:02.558807 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:02.558839 1437114 retry.go:31] will retry after 1.68559081s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:02.936227 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1209 05:51:01.683581 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:51:04.183570 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	I1209 05:51:03.436252 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:03.936492 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:04.009914 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1209 05:51:04.068567 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:04.068604 1437114 retry.go:31] will retry after 3.558332748s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:04.244680 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1209 05:51:04.309239 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:04.309330 1437114 retry.go:31] will retry after 5.213787505s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:04.436559 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:04.936651 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:05.367810 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1209 05:51:05.433548 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:05.433586 1437114 retry.go:31] will retry after 5.477878375s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:05.436872 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:05.936073 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:06.436593 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:06.936543 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:07.436871 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:07.628150 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1209 05:51:07.690629 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:07.690661 1437114 retry.go:31] will retry after 6.157660473s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:07.935908 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1209 05:51:06.183630 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:51:08.683544 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	I1209 05:51:08.436122 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:08.935959 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:09.436970 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:09.523671 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1209 05:51:09.581839 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:09.581914 1437114 retry.go:31] will retry after 9.601279523s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:09.936233 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:10.436178 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:10.911744 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1209 05:51:10.936618 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1209 05:51:11.040149 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:11.040187 1437114 retry.go:31] will retry after 9.211684326s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:11.436896 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:11.936862 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:12.435946 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:12.936781 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1209 05:51:10.683655 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:51:12.684274 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	I1209 05:51:13.436827 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:13.848647 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1209 05:51:13.909374 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:13.909406 1437114 retry.go:31] will retry after 5.044533036s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:13.936521 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:14.436557 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:14.935977 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:15.436310 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:15.936335 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:16.436628 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:16.936535 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:17.436311 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:17.935962 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1209 05:51:15.183508 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:51:17.183575 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:51:19.184498 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	I1209 05:51:18.435898 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:18.936142 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:18.955073 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1209 05:51:19.020072 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:19.020104 1437114 retry.go:31] will retry after 11.951102235s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:19.184688 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1209 05:51:19.284505 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:19.284538 1437114 retry.go:31] will retry after 12.030085055s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
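
The retry.go:31 lines show minikube's addon applier re-running each failed kubectl apply after a randomized, growing delay (5s, 12s, 20s, 47s over this run) rather than a fixed interval. A minimal sketch of that pattern, with capped backoff plus jitter (the multiplier and cap here are illustrative, not minikube's exact values):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

// applyWithRetry runs the command until it succeeds or attempts run out,
// sleeping an increasing, jittered delay between failures.
func applyWithRetry(attempts int, name string, args ...string) error {
	delay := 5 * time.Second
	for i := 0; i < attempts; i++ {
		if err := exec.Command(name, args...).Run(); err == nil {
			return nil
		}
		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
		fmt.Printf("apply failed, will retry after %v\n", delay+jitter)
		time.Sleep(delay + jitter)
		if delay < 40*time.Second {
			delay *= 2 // capped exponential growth
		}
	}
	return errors.New("apply did not succeed")
}

func main() {
	err := applyWithRetry(5, "kubectl", "apply", "--force", "-f", "/etc/kubernetes/addons/storageclass.yaml")
	fmt.Println("result:", err)
}
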
	I1209 05:51:19.435928 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:19.936763 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:20.252740 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1209 05:51:20.316752 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:20.316784 1437114 retry.go:31] will retry after 7.019613017s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:20.436227 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:20.936875 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:21.435907 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:21.935963 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:22.436158 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:22.936474 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1209 05:51:21.683564 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:51:23.683626 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	W1209 05:51:26.184579 1429857 node_ready.go:55] error getting node "no-preload-842269" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-842269": dial tcp 192.168.85.2:8443: connect: connection refused
	I1209 05:51:27.683214 1429857 node_ready.go:38] duration metric: took 6m0.000146062s for node "no-preload-842269" to be "Ready" ...
	I1209 05:51:27.686512 1429857 out.go:203] 
	W1209 05:51:27.689522 1429857 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1209 05:51:27.689540 1429857 out.go:285] * 
	W1209 05:51:27.691657 1429857 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 05:51:27.694499 1429857 out.go:203] 
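
Process 1429857 above is the no-preload SecondStart run giving up: node_ready.go polled GET /api/v1/nodes/no-preload-842269 for the full 6m0s window, every request was refused, and the start exits with GUEST_START / WaitNodeCondition: context deadline exceeded. A minimal client-go sketch of that readiness wait (kubeconfig path and node name taken from the log; the 2s interval and the PollUntilContextTimeout helper are illustrative, not minikube's exact code):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll the node's Ready condition, tolerating transient errors such as
	// "connection refused", until the 6-minute deadline expires.
	err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, getErr := cs.CoreV1().Nodes().Get(ctx, "no-preload-842269", metav1.GetOptions{})
			if getErr != nil {
				return false, nil // retry, mirroring node_ready.go:55
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	fmt.Println("wait result:", err) // context deadline exceeded when never Ready
}
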
	I1209 05:51:23.436353 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:23.936003 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:24.435917 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:24.936039 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:25.435883 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:25.936680 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:26.436359 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:26.936582 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:27.336866 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1209 05:51:27.401213 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:27.401248 1437114 retry.go:31] will retry after 15.185111317s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:27.436540 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:27.936409 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:28.436146 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:28.936943 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:29.435893 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:29.936169 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:30.435922 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:30.936805 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:30.972257 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1209 05:51:31.030985 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:31.031019 1437114 retry.go:31] will retry after 20.454574576s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:31.315422 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1209 05:51:31.375282 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:31.375315 1437114 retry.go:31] will retry after 20.731698158s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:31.436402 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:31.936683 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:32.436139 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:32.936168 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:33.436458 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:33.936647 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:34.435986 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:34.935949 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:35.436254 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:35.936501 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:36.436171 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:36.936413 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:37.436503 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:37.936112 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:38.436260 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:38.936155 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:39.435919 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:39.935963 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:40.435931 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:40.936251 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:41.435937 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:41.936193 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:42.436356 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:42.587277 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1209 05:51:42.649100 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:42.649137 1437114 retry.go:31] will retry after 20.728553891s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:42.936771 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:43.435958 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:43.936674 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:44.436708 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:44.936177 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:45.436620 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:45.936616 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:46.436000 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:46.936141 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:47.435976 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:47.936139 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:48.436162 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:48.936736 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:49.436154 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:49.936192 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:50.436517 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:50.936806 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:51.436499 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:51.485950 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1209 05:51:51.548585 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:51.548614 1437114 retry.go:31] will retry after 47.596790172s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:51.936087 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:52.108051 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1209 05:51:52.167486 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:52.167519 1437114 retry.go:31] will retry after 29.777424896s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:51:52.436906 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:52.936203 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:53.436751 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:53.936576 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:54.436593 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:54.935988 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:55.436246 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:51:55.436382 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:51:55.467996 1437114 cri.go:89] found id: ""
	I1209 05:51:55.468084 1437114 logs.go:282] 0 containers: []
	W1209 05:51:55.468107 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:51:55.468125 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:51:55.468223 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:51:55.504401 1437114 cri.go:89] found id: ""
	I1209 05:51:55.504427 1437114 logs.go:282] 0 containers: []
	W1209 05:51:55.504434 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:51:55.504440 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:51:55.504513 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:51:55.530581 1437114 cri.go:89] found id: ""
	I1209 05:51:55.530606 1437114 logs.go:282] 0 containers: []
	W1209 05:51:55.530615 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:51:55.530621 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:51:55.530689 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:51:55.555637 1437114 cri.go:89] found id: ""
	I1209 05:51:55.555708 1437114 logs.go:282] 0 containers: []
	W1209 05:51:55.555744 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:51:55.555768 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:51:55.555867 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:51:55.582108 1437114 cri.go:89] found id: ""
	I1209 05:51:55.582132 1437114 logs.go:282] 0 containers: []
	W1209 05:51:55.582141 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:51:55.582148 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:51:55.582242 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:51:55.606067 1437114 cri.go:89] found id: ""
	I1209 05:51:55.606092 1437114 logs.go:282] 0 containers: []
	W1209 05:51:55.606101 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:51:55.606119 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:51:55.606179 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:51:55.632387 1437114 cri.go:89] found id: ""
	I1209 05:51:55.632413 1437114 logs.go:282] 0 containers: []
	W1209 05:51:55.632422 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:51:55.632428 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:51:55.632489 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:51:55.657181 1437114 cri.go:89] found id: ""
	I1209 05:51:55.657207 1437114 logs.go:282] 0 containers: []
	W1209 05:51:55.657215 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:51:55.657224 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:51:55.657236 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:51:55.718829 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:51:55.710893    1841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:51:55.711561    1841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:51:55.713071    1841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:51:55.713520    1841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:51:55.714997    1841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:51:55.710893    1841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:51:55.711561    1841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:51:55.713071    1841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:51:55.713520    1841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:51:55.714997    1841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
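
The describe-nodes failure above happens one step earlier than the apply failures: before issuing any real request, kubectl's cached discovery layer (memcache.go) fetches the server's API group list from /api and /apis, and that is what dials [::1]:8443 and is refused five times before the command aborts. The same step can be reproduced with client-go's discovery client (a sketch; kubeconfig path as in the log):

package main

import (
	"fmt"

	"k8s.io/client-go/discovery"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	dc, err := discovery.NewDiscoveryClientForConfig(cfg)
	if err != nil {
		panic(err)
	}
	groups, err := dc.ServerGroups() // GETs /api and /apis, as memcache.go does
	if err != nil {
		fmt.Println("discovery failed:", err) // connection refused while the apiserver is down
		return
	}
	fmt.Println("API groups:", len(groups.Groups))
}
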
	I1209 05:51:55.718849 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:51:55.718861 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:51:55.745044 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:51:55.745076 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:51:55.779273 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:51:55.779300 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:51:55.836724 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:51:55.836759 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
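
Having found no kube-apiserver process, minikube sweeps the container runtime for each expected control-plane container by name via crictl (every query above returns "0 containers", i.e. nothing was ever started), then collects the kubelet and containerd journals plus dmesg for the report. A standalone sketch of that container sweep (requires crictl on the node; names and flags as in the log):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists all containers (any state) whose name matches, the same
// query as "sudo crictl ps -a --quiet --name=<name>" in the log above.
func containerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard"} {
		ids, err := containerIDs(name)
		if err != nil {
			fmt.Printf("%s: crictl failed: %v\n", name, err)
			continue
		}
		fmt.Printf("%s: %d containers: %v\n", name, len(ids), ids)
	}
}
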
	I1209 05:51:58.354526 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:51:58.364806 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:51:58.364873 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:51:58.394168 1437114 cri.go:89] found id: ""
	I1209 05:51:58.394193 1437114 logs.go:282] 0 containers: []
	W1209 05:51:58.394201 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:51:58.394213 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:51:58.394269 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:51:58.419742 1437114 cri.go:89] found id: ""
	I1209 05:51:58.419776 1437114 logs.go:282] 0 containers: []
	W1209 05:51:58.419785 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:51:58.419792 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:51:58.419859 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:51:58.464612 1437114 cri.go:89] found id: ""
	I1209 05:51:58.464637 1437114 logs.go:282] 0 containers: []
	W1209 05:51:58.464646 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:51:58.464652 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:51:58.464707 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:51:58.496121 1437114 cri.go:89] found id: ""
	I1209 05:51:58.496148 1437114 logs.go:282] 0 containers: []
	W1209 05:51:58.496157 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:51:58.496163 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:51:58.496259 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:51:58.520390 1437114 cri.go:89] found id: ""
	I1209 05:51:58.520429 1437114 logs.go:282] 0 containers: []
	W1209 05:51:58.520439 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:51:58.520452 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:51:58.520531 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:51:58.546795 1437114 cri.go:89] found id: ""
	I1209 05:51:58.546828 1437114 logs.go:282] 0 containers: []
	W1209 05:51:58.546838 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:51:58.546847 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:51:58.546911 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:51:58.570252 1437114 cri.go:89] found id: ""
	I1209 05:51:58.570279 1437114 logs.go:282] 0 containers: []
	W1209 05:51:58.570289 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:51:58.570295 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:51:58.570359 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:51:58.594153 1437114 cri.go:89] found id: ""
	I1209 05:51:58.594178 1437114 logs.go:282] 0 containers: []
	W1209 05:51:58.594187 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:51:58.594195 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:51:58.594207 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:51:58.621218 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:51:58.621244 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:51:58.675840 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:51:58.675877 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:51:58.691699 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:51:58.691734 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:51:58.755150 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:51:58.747260    1968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:51:58.747839    1968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:51:58.749288    1968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:51:58.749743    1968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:51:58.751180    1968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:51:58.747260    1968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:51:58.747839    1968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:51:58.749288    1968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:51:58.749743    1968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:51:58.751180    1968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:51:58.755171 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:51:58.755185 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
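
Each block above is one iteration of minikube's readiness poll: it asks the CRI for every expected control-plane container by name, and an empty result is what the logger records as found id: "" followed by 0 containers. A minimal way to reproduce the check by hand (assumption: run inside the affected node, for example over minikube ssh, just as the log's ssh_runner does):

    # list any kube-apiserver container known to the CRI, running or exited
    sudo crictl ps -a --quiet --name=kube-apiserver
    # empty output here is exactly what the log records as: found id: ""
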
	I1209 05:52:01.281475 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:52:01.293255 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:52:01.293329 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:52:01.318701 1437114 cri.go:89] found id: ""
	I1209 05:52:01.318740 1437114 logs.go:282] 0 containers: []
	W1209 05:52:01.318749 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:52:01.318757 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:52:01.318827 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:52:01.343120 1437114 cri.go:89] found id: ""
	I1209 05:52:01.343145 1437114 logs.go:282] 0 containers: []
	W1209 05:52:01.343154 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:52:01.343170 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:52:01.343228 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:52:01.367699 1437114 cri.go:89] found id: ""
	I1209 05:52:01.367725 1437114 logs.go:282] 0 containers: []
	W1209 05:52:01.367733 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:52:01.367749 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:52:01.367823 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:52:01.394578 1437114 cri.go:89] found id: ""
	I1209 05:52:01.394603 1437114 logs.go:282] 0 containers: []
	W1209 05:52:01.394612 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:52:01.394618 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:52:01.394677 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:52:01.423264 1437114 cri.go:89] found id: ""
	I1209 05:52:01.423290 1437114 logs.go:282] 0 containers: []
	W1209 05:52:01.423299 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:52:01.423305 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:52:01.423367 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:52:01.460737 1437114 cri.go:89] found id: ""
	I1209 05:52:01.460764 1437114 logs.go:282] 0 containers: []
	W1209 05:52:01.460772 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:52:01.460778 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:52:01.460850 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:52:01.493246 1437114 cri.go:89] found id: ""
	I1209 05:52:01.493272 1437114 logs.go:282] 0 containers: []
	W1209 05:52:01.493281 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:52:01.493287 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:52:01.493364 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:52:01.517585 1437114 cri.go:89] found id: ""
	I1209 05:52:01.517612 1437114 logs.go:282] 0 containers: []
	W1209 05:52:01.517620 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:52:01.517630 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:52:01.517670 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:52:01.579907 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:52:01.571951    2063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:01.572467    2063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:01.574150    2063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:01.574485    2063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:01.575978    2063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:52:01.571951    2063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:01.572467    2063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:01.574150    2063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:01.574485    2063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:01.575978    2063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:52:01.579934 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:52:01.579951 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:52:01.605933 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:52:01.605968 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:52:01.633450 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:52:01.633476 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:52:01.690768 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:52:01.690809 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
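
The "Gathering logs for ..." steps are plain shell-outs, so the same evidence can be collected by hand when triaging a run like this one. The commands below are copied verbatim from the log:

    sudo journalctl -u kubelet -n 400      # last 400 lines of the kubelet unit
    sudo journalctl -u containerd -n 400   # last 400 lines of the containerd unit
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
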
	I1209 05:52:03.378312 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1209 05:52:03.443761 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:52:03.443892 1437114 retry.go:31] will retry after 46.030372913s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
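
The dashboard apply fails before any object reaches the cluster: kubectl's client-side validation has to download the server's OpenAPI document first, and nothing is listening on localhost:8443, so every manifest is rejected with the same download error and minikube schedules a retry with backoff (here about 46s). The --validate=false escape hatch suggested in the stderr only skips that schema check; it would not make the apply succeed while the apiserver is down. A sketch of the suggested invocation for one manifest (command taken from the log, flag from the stderr):

    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force --validate=false \
      -f /etc/kubernetes/addons/dashboard-ns.yaml
    # skips OpenAPI validation only; submission still needs a reachable apiserver
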
	I1209 05:52:04.208154 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:52:04.218947 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:52:04.219023 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:52:04.250185 1437114 cri.go:89] found id: ""
	I1209 05:52:04.250210 1437114 logs.go:282] 0 containers: []
	W1209 05:52:04.250219 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:52:04.250226 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:52:04.250336 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:52:04.278437 1437114 cri.go:89] found id: ""
	I1209 05:52:04.278462 1437114 logs.go:282] 0 containers: []
	W1209 05:52:04.278471 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:52:04.278477 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:52:04.278540 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:52:04.306148 1437114 cri.go:89] found id: ""
	I1209 05:52:04.306212 1437114 logs.go:282] 0 containers: []
	W1209 05:52:04.306227 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:52:04.306235 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:52:04.306294 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:52:04.330968 1437114 cri.go:89] found id: ""
	I1209 05:52:04.330995 1437114 logs.go:282] 0 containers: []
	W1209 05:52:04.331003 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:52:04.331014 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:52:04.331074 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:52:04.361139 1437114 cri.go:89] found id: ""
	I1209 05:52:04.361213 1437114 logs.go:282] 0 containers: []
	W1209 05:52:04.361228 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:52:04.361235 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:52:04.361292 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:52:04.384663 1437114 cri.go:89] found id: ""
	I1209 05:52:04.384728 1437114 logs.go:282] 0 containers: []
	W1209 05:52:04.384744 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:52:04.384751 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:52:04.384819 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:52:04.409163 1437114 cri.go:89] found id: ""
	I1209 05:52:04.409188 1437114 logs.go:282] 0 containers: []
	W1209 05:52:04.409196 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:52:04.409202 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:52:04.409260 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:52:04.438875 1437114 cri.go:89] found id: ""
	I1209 05:52:04.438901 1437114 logs.go:282] 0 containers: []
	W1209 05:52:04.438911 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:52:04.438920 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:52:04.438930 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:52:04.504081 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:52:04.504118 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:52:04.520282 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:52:04.520314 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:52:04.582173 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:52:04.574497    2189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:04.575080    2189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:04.576516    2189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:04.576898    2189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:04.578287    2189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:52:04.574497    2189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:04.575080    2189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:04.576516    2189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:04.576898    2189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:04.578287    2189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:52:04.582197 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:52:04.582209 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:52:04.607423 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:52:04.607456 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
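
Every kubectl failure in this run shares one symptom, dial tcp [::1]:8443: connect: connection refused, meaning nothing is listening on the apiserver port at all. A direct probe separates "apiserver process not running" from certificate or kubeconfig trouble (assumption: run inside the node; /healthz is the apiserver's basic health endpoint):

    curl -k https://localhost:8443/healthz
    # "connection refused" reproduces the symptom above; a live apiserver
    # would answer (typically "ok") even if kubectl auth were misconfigured
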
	I1209 05:52:07.139347 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:52:07.149801 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:52:07.149872 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:52:07.174952 1437114 cri.go:89] found id: ""
	I1209 05:52:07.174980 1437114 logs.go:282] 0 containers: []
	W1209 05:52:07.174988 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:52:07.174995 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:52:07.175054 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:52:07.202325 1437114 cri.go:89] found id: ""
	I1209 05:52:07.202387 1437114 logs.go:282] 0 containers: []
	W1209 05:52:07.202418 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:52:07.202437 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:52:07.202533 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:52:07.232008 1437114 cri.go:89] found id: ""
	I1209 05:52:07.232092 1437114 logs.go:282] 0 containers: []
	W1209 05:52:07.232147 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:52:07.232170 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:52:07.232265 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:52:07.259048 1437114 cri.go:89] found id: ""
	I1209 05:52:07.259075 1437114 logs.go:282] 0 containers: []
	W1209 05:52:07.259084 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:52:07.259091 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:52:07.259147 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:52:07.283135 1437114 cri.go:89] found id: ""
	I1209 05:52:07.283161 1437114 logs.go:282] 0 containers: []
	W1209 05:52:07.283169 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:52:07.283175 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:52:07.283285 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:52:07.307259 1437114 cri.go:89] found id: ""
	I1209 05:52:07.307285 1437114 logs.go:282] 0 containers: []
	W1209 05:52:07.307294 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:52:07.307300 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:52:07.307357 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:52:07.331534 1437114 cri.go:89] found id: ""
	I1209 05:52:07.331604 1437114 logs.go:282] 0 containers: []
	W1209 05:52:07.331627 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:52:07.331645 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:52:07.331742 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:52:07.358525 1437114 cri.go:89] found id: ""
	I1209 05:52:07.358548 1437114 logs.go:282] 0 containers: []
	W1209 05:52:07.358557 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:52:07.358565 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:52:07.358577 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:52:07.424932 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:52:07.417064    2295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:07.417623    2295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:07.419222    2295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:07.419698    2295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:07.421122    2295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:52:07.417064    2295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:07.417623    2295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:07.419222    2295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:07.419698    2295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:07.421122    2295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:52:07.425003 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:52:07.425028 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:52:07.452549 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:52:07.452633 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:52:07.488600 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:52:07.488675 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:52:07.547568 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:52:07.547604 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:52:10.063961 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:52:10.075421 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:52:10.075510 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:52:10.106279 1437114 cri.go:89] found id: ""
	I1209 05:52:10.106307 1437114 logs.go:282] 0 containers: []
	W1209 05:52:10.106317 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:52:10.106323 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:52:10.106395 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:52:10.140825 1437114 cri.go:89] found id: ""
	I1209 05:52:10.140865 1437114 logs.go:282] 0 containers: []
	W1209 05:52:10.140874 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:52:10.140881 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:52:10.140961 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:52:10.166337 1437114 cri.go:89] found id: ""
	I1209 05:52:10.166364 1437114 logs.go:282] 0 containers: []
	W1209 05:52:10.166373 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:52:10.166380 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:52:10.166460 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:52:10.202390 1437114 cri.go:89] found id: ""
	I1209 05:52:10.202417 1437114 logs.go:282] 0 containers: []
	W1209 05:52:10.202426 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:52:10.202432 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:52:10.202541 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:52:10.230690 1437114 cri.go:89] found id: ""
	I1209 05:52:10.230716 1437114 logs.go:282] 0 containers: []
	W1209 05:52:10.230726 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:52:10.230733 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:52:10.230847 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:52:10.257345 1437114 cri.go:89] found id: ""
	I1209 05:52:10.257371 1437114 logs.go:282] 0 containers: []
	W1209 05:52:10.257380 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:52:10.257386 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:52:10.257452 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:52:10.282028 1437114 cri.go:89] found id: ""
	I1209 05:52:10.282053 1437114 logs.go:282] 0 containers: []
	W1209 05:52:10.282062 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:52:10.282069 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:52:10.282136 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:52:10.306484 1437114 cri.go:89] found id: ""
	I1209 05:52:10.306509 1437114 logs.go:282] 0 containers: []
	W1209 05:52:10.306519 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:52:10.306538 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:52:10.306550 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:52:10.334032 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:52:10.334059 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:52:10.396200 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:52:10.396241 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:52:10.412481 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:52:10.412513 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:52:10.512214 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:52:10.503459    2422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:10.504106    2422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:10.505795    2422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:10.506184    2422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:10.507800    2422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:52:10.503459    2422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:10.504106    2422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:10.505795    2422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:10.506184    2422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:10.507800    2422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:52:10.512237 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:52:10.512250 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
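
Each cycle opens with the same process probe, sudo pgrep -xnf kube-apiserver.*minikube.*. The flags matter: -f matches the pattern against the full command line, -x requires it to match that whole line exactly, and -n keeps only the newest match. Run by hand:

    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    # no output and a nonzero exit status mean no matching apiserver process
    # exists, which is why the CRI listing that follows is attempted at all
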
	I1209 05:52:13.038285 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:52:13.048783 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:52:13.048856 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:52:13.073147 1437114 cri.go:89] found id: ""
	I1209 05:52:13.073174 1437114 logs.go:282] 0 containers: []
	W1209 05:52:13.073182 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:52:13.073189 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:52:13.073264 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:52:13.096887 1437114 cri.go:89] found id: ""
	I1209 05:52:13.096911 1437114 logs.go:282] 0 containers: []
	W1209 05:52:13.096919 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:52:13.096926 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:52:13.096983 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:52:13.120441 1437114 cri.go:89] found id: ""
	I1209 05:52:13.120466 1437114 logs.go:282] 0 containers: []
	W1209 05:52:13.120475 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:52:13.120482 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:52:13.120540 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:52:13.144403 1437114 cri.go:89] found id: ""
	I1209 05:52:13.144478 1437114 logs.go:282] 0 containers: []
	W1209 05:52:13.144494 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:52:13.144504 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:52:13.144576 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:52:13.168584 1437114 cri.go:89] found id: ""
	I1209 05:52:13.168610 1437114 logs.go:282] 0 containers: []
	W1209 05:52:13.168619 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:52:13.168626 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:52:13.168683 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:52:13.204797 1437114 cri.go:89] found id: ""
	I1209 05:52:13.204824 1437114 logs.go:282] 0 containers: []
	W1209 05:52:13.204833 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:52:13.204840 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:52:13.204899 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:52:13.231178 1437114 cri.go:89] found id: ""
	I1209 05:52:13.231205 1437114 logs.go:282] 0 containers: []
	W1209 05:52:13.231214 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:52:13.231220 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:52:13.231278 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:52:13.260307 1437114 cri.go:89] found id: ""
	I1209 05:52:13.260331 1437114 logs.go:282] 0 containers: []
	W1209 05:52:13.260341 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:52:13.260350 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:52:13.260361 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:52:13.286145 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:52:13.286182 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:52:13.315119 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:52:13.315147 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:52:13.369862 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:52:13.369894 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:52:13.385795 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:52:13.385822 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:52:13.451305 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:52:13.443201    2536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:13.444044    2536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:13.445720    2536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:13.446006    2536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:13.447466    2536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:52:13.443201    2536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:13.444044    2536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:13.445720    2536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:13.446006    2536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:13.447466    2536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
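
Two reading aids for the surrounding blocks. First, the five identical memcache.go errors per kubectl run come from a single invocation: kubectl's API discovery retries the /api endpoint several times before printing the final "connection to the server ... was refused" line. Second, the whole cycle repeats on a roughly 3-second cadence (compare the 05:52:07, 05:52:10, 05:52:13 timestamps) until minikube's wait deadline expires. As a hedged illustration of that outer loop, not minikube's actual code:

    # illustration only: poll until an apiserver container appears in the CRI
    until sudo crictl ps --quiet --name=kube-apiserver | grep -q .; do
      sleep 3   # matches the ~3s spacing of the cycles in this log
      # ...gather kubelet/containerd/dmesg/describe-nodes logs, as above...
    done
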
	I1209 05:52:15.952193 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:52:15.962440 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:52:15.962511 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:52:15.990421 1437114 cri.go:89] found id: ""
	I1209 05:52:15.990444 1437114 logs.go:282] 0 containers: []
	W1209 05:52:15.990452 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:52:15.990459 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:52:15.990527 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:52:16.025731 1437114 cri.go:89] found id: ""
	I1209 05:52:16.025759 1437114 logs.go:282] 0 containers: []
	W1209 05:52:16.025768 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:52:16.025775 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:52:16.025850 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:52:16.051150 1437114 cri.go:89] found id: ""
	I1209 05:52:16.051184 1437114 logs.go:282] 0 containers: []
	W1209 05:52:16.051193 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:52:16.051199 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:52:16.051269 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:52:16.080315 1437114 cri.go:89] found id: ""
	I1209 05:52:16.080343 1437114 logs.go:282] 0 containers: []
	W1209 05:52:16.080352 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:52:16.080358 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:52:16.080421 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:52:16.106254 1437114 cri.go:89] found id: ""
	I1209 05:52:16.106329 1437114 logs.go:282] 0 containers: []
	W1209 05:52:16.106344 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:52:16.106351 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:52:16.106419 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:52:16.130691 1437114 cri.go:89] found id: ""
	I1209 05:52:16.130717 1437114 logs.go:282] 0 containers: []
	W1209 05:52:16.130726 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:52:16.130732 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:52:16.130788 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:52:16.156232 1437114 cri.go:89] found id: ""
	I1209 05:52:16.156257 1437114 logs.go:282] 0 containers: []
	W1209 05:52:16.156266 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:52:16.156272 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:52:16.156333 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:52:16.186070 1437114 cri.go:89] found id: ""
	I1209 05:52:16.186091 1437114 logs.go:282] 0 containers: []
	W1209 05:52:16.186100 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:52:16.186109 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:52:16.186121 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:52:16.203551 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:52:16.203579 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:52:16.280037 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:52:16.272128    2631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:16.272800    2631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:16.274272    2631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:16.274686    2631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:16.276185    2631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:52:16.272128    2631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:16.272800    2631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:16.274272    2631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:16.274686    2631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:16.276185    2631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:52:16.280087 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:52:16.280102 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:52:16.304445 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:52:16.304479 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:52:16.333574 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:52:16.333599 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:52:18.890807 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:52:18.901129 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:52:18.901207 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:52:18.925553 1437114 cri.go:89] found id: ""
	I1209 05:52:18.925576 1437114 logs.go:282] 0 containers: []
	W1209 05:52:18.925584 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:52:18.925590 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:52:18.925648 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:52:18.951104 1437114 cri.go:89] found id: ""
	I1209 05:52:18.951180 1437114 logs.go:282] 0 containers: []
	W1209 05:52:18.951203 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:52:18.951221 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:52:18.951309 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:52:18.975343 1437114 cri.go:89] found id: ""
	I1209 05:52:18.975407 1437114 logs.go:282] 0 containers: []
	W1209 05:52:18.975432 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:52:18.975450 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:52:18.975535 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:52:18.999522 1437114 cri.go:89] found id: ""
	I1209 05:52:18.999596 1437114 logs.go:282] 0 containers: []
	W1209 05:52:18.999619 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:52:18.999637 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:52:18.999722 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:52:19.025106 1437114 cri.go:89] found id: ""
	I1209 05:52:19.025181 1437114 logs.go:282] 0 containers: []
	W1209 05:52:19.025203 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:52:19.025221 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:52:19.025307 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:52:19.047867 1437114 cri.go:89] found id: ""
	I1209 05:52:19.047944 1437114 logs.go:282] 0 containers: []
	W1209 05:52:19.047966 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:52:19.048006 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:52:19.048106 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:52:19.071487 1437114 cri.go:89] found id: ""
	I1209 05:52:19.071511 1437114 logs.go:282] 0 containers: []
	W1209 05:52:19.071519 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:52:19.071526 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:52:19.071585 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:52:19.096506 1437114 cri.go:89] found id: ""
	I1209 05:52:19.096531 1437114 logs.go:282] 0 containers: []
	W1209 05:52:19.096540 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:52:19.096549 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:52:19.096595 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:52:19.111961 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:52:19.112001 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:52:19.184448 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:52:19.173564    2740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:19.174163    2740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:19.175662    2740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:19.176275    2740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:19.178917    2740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:52:19.173564    2740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:19.174163    2740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:19.175662    2740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:19.176275    2740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:19.178917    2740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:52:19.184473 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:52:19.184487 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:52:19.213109 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:52:19.213148 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:52:19.242001 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:52:19.242036 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
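The block above is one full iteration of minikube's diagnostic loop: it first looks for a running apiserver process (pgrep -xnf kube-apiserver.*minikube.*), then asks crictl for each control-plane container by name (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, kubernetes-dashboard), and only then gathers journalctl, dmesg, and describe-nodes output. Every `found id: ""` means crictl returned nothing even with -a (all states), i.e. the containers were never created, rather than created and then crashed. One step of the scan can be reproduced by hand inside the node; a sketch, with --state added as a hedged cross-check for crash loops:

    # Empty output even with -a means the container was never created:
    sudo crictl ps -a --quiet --name=kube-apiserver
    # Cross-check: any containers that at least started and then exited?
    sudo crictl ps -a --state exited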
	I1209 05:52:21.800441 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:52:21.810634 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:52:21.810706 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:52:21.835147 1437114 cri.go:89] found id: ""
	I1209 05:52:21.835171 1437114 logs.go:282] 0 containers: []
	W1209 05:52:21.835180 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:52:21.835186 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:52:21.835244 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:52:21.863735 1437114 cri.go:89] found id: ""
	I1209 05:52:21.863760 1437114 logs.go:282] 0 containers: []
	W1209 05:52:21.863769 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:52:21.863775 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:52:21.863833 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:52:21.887643 1437114 cri.go:89] found id: ""
	I1209 05:52:21.887667 1437114 logs.go:282] 0 containers: []
	W1209 05:52:21.887676 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:52:21.887682 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:52:21.887738 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:52:21.912358 1437114 cri.go:89] found id: ""
	I1209 05:52:21.912384 1437114 logs.go:282] 0 containers: []
	W1209 05:52:21.912392 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:52:21.912399 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:52:21.912458 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:52:21.941394 1437114 cri.go:89] found id: ""
	I1209 05:52:21.941420 1437114 logs.go:282] 0 containers: []
	W1209 05:52:21.941429 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:52:21.941435 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:52:21.941521 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:52:21.945768 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 05:52:21.973669 1437114 cri.go:89] found id: ""
	I1209 05:52:21.973703 1437114 logs.go:282] 0 containers: []
	W1209 05:52:21.973712 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:52:21.973734 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:52:21.973814 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	W1209 05:52:22.028092 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 05:52:22.028115 1437114 cri.go:89] found id: ""
	I1209 05:52:22.028247 1437114 logs.go:282] 0 containers: []
	W1209 05:52:22.028256 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:52:22.028268 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	W1209 05:52:22.028296 1437114 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
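The storageclass addon failure above is a secondary symptom: `kubectl apply` validates the manifest against the server's OpenAPI schema, so with the apiserver unreachable the apply fails before anything is submitted, and the suggested `--validate=false` would only skip the schema download, not make the apply succeed. Since addons.go logs "apply failed, will retry", the same command is re-attempted; the retry pattern looks roughly like the sketch below (illustrative only, not minikube's actual code):

    # Hedged sketch of a bounded retry around the same apply command:
    KUBECTL=/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl
    for i in 1 2 3 4 5; do
      sudo KUBECONFIG=/var/lib/minikube/kubeconfig "$KUBECTL" apply --force -f /etc/kubernetes/addons/storageclass.yaml && break
      sleep 2
    done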
	I1209 05:52:22.028335 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:52:22.054827 1437114 cri.go:89] found id: ""
	I1209 05:52:22.054854 1437114 logs.go:282] 0 containers: []
	W1209 05:52:22.054862 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:52:22.054871 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:52:22.054883 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:52:22.081941 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:52:22.081985 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:52:22.109801 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:52:22.109829 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:52:22.167418 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:52:22.167455 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:52:22.186947 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:52:22.187039 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:52:22.274107 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:52:22.265076    2876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:22.265712    2876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:22.267349    2876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:22.267990    2876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:22.269553    2876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:52:22.265076    2876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:22.265712    2876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:22.267349    2876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:22.267990    2876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:22.269553    2876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:52:24.774371 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:52:24.785291 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:52:24.785383 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:52:24.810496 1437114 cri.go:89] found id: ""
	I1209 05:52:24.810521 1437114 logs.go:282] 0 containers: []
	W1209 05:52:24.810530 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:52:24.810537 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:52:24.810641 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:52:24.840246 1437114 cri.go:89] found id: ""
	I1209 05:52:24.840283 1437114 logs.go:282] 0 containers: []
	W1209 05:52:24.840292 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:52:24.840298 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:52:24.840383 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:52:24.866227 1437114 cri.go:89] found id: ""
	I1209 05:52:24.866252 1437114 logs.go:282] 0 containers: []
	W1209 05:52:24.866267 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:52:24.866274 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:52:24.866334 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:52:24.894487 1437114 cri.go:89] found id: ""
	I1209 05:52:24.894512 1437114 logs.go:282] 0 containers: []
	W1209 05:52:24.894521 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:52:24.894528 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:52:24.894592 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:52:24.919081 1437114 cri.go:89] found id: ""
	I1209 05:52:24.919106 1437114 logs.go:282] 0 containers: []
	W1209 05:52:24.919115 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:52:24.919122 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:52:24.919182 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:52:24.942639 1437114 cri.go:89] found id: ""
	I1209 05:52:24.942664 1437114 logs.go:282] 0 containers: []
	W1209 05:52:24.942673 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:52:24.942679 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:52:24.942736 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:52:24.966811 1437114 cri.go:89] found id: ""
	I1209 05:52:24.966835 1437114 logs.go:282] 0 containers: []
	W1209 05:52:24.966844 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:52:24.966849 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:52:24.966906 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:52:24.990491 1437114 cri.go:89] found id: ""
	I1209 05:52:24.990515 1437114 logs.go:282] 0 containers: []
	W1209 05:52:24.990524 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:52:24.990533 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:52:24.990544 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:52:25.049211 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:52:25.049244 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:52:25.065441 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:52:25.065469 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:52:25.128713 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:52:25.120700    2971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:25.121283    2971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:25.122776    2971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:25.123296    2971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:25.124752    2971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:52:25.120700    2971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:25.121283    2971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:25.122776    2971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:25.123296    2971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:25.124752    2971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
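With `describe nodes` unusable (it needs a live apiserver), the equivalent offline checks are the kubelet journal already being gathered above and the static pod manifests, since in a kubeadm-style cluster the kubelet alone is responsible for starting the control-plane pods from /etc/kubernetes/manifests. A quick manual check inside the node, a sketch assuming the default kubeadm paths:

    # The control-plane pod specs the kubelet should be acting on:
    sudo ls -l /etc/kubernetes/manifests/
    # The kubelet's most recent attempts and errors, unpaged:
    sudo journalctl -u kubelet -n 50 --no-pager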
	I1209 05:52:25.128735 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:52:25.128750 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:52:25.154485 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:52:25.154518 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:52:27.686448 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:52:27.697271 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:52:27.697388 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:52:27.723850 1437114 cri.go:89] found id: ""
	I1209 05:52:27.723930 1437114 logs.go:282] 0 containers: []
	W1209 05:52:27.723953 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:52:27.723970 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:52:27.724082 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:52:27.749864 1437114 cri.go:89] found id: ""
	I1209 05:52:27.749889 1437114 logs.go:282] 0 containers: []
	W1209 05:52:27.749897 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:52:27.749904 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:52:27.749989 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:52:27.773124 1437114 cri.go:89] found id: ""
	I1209 05:52:27.773151 1437114 logs.go:282] 0 containers: []
	W1209 05:52:27.773167 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:52:27.773174 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:52:27.773238 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:52:27.802090 1437114 cri.go:89] found id: ""
	I1209 05:52:27.802118 1437114 logs.go:282] 0 containers: []
	W1209 05:52:27.802128 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:52:27.802134 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:52:27.802193 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:52:27.827324 1437114 cri.go:89] found id: ""
	I1209 05:52:27.827349 1437114 logs.go:282] 0 containers: []
	W1209 05:52:27.827361 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:52:27.827367 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:52:27.827425 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:52:27.855877 1437114 cri.go:89] found id: ""
	I1209 05:52:27.855905 1437114 logs.go:282] 0 containers: []
	W1209 05:52:27.855914 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:52:27.855920 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:52:27.855980 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:52:27.880242 1437114 cri.go:89] found id: ""
	I1209 05:52:27.880322 1437114 logs.go:282] 0 containers: []
	W1209 05:52:27.880346 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:52:27.880365 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:52:27.880457 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:52:27.903986 1437114 cri.go:89] found id: ""
	I1209 05:52:27.904032 1437114 logs.go:282] 0 containers: []
	W1209 05:52:27.904041 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:52:27.904079 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:52:27.904100 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:52:27.937811 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:52:27.937838 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:52:27.993533 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:52:27.993570 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:52:28.010780 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:52:28.010818 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:52:28.075391 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:52:28.066786    3096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:28.067667    3096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:28.069423    3096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:28.069776    3096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:28.071145    3096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:52:28.066786    3096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:28.067667    3096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:28.069423    3096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:28.069776    3096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:28.071145    3096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:52:28.075424 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:52:28.075454 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:52:30.602097 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:52:30.612434 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:52:30.612508 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:52:30.638153 1437114 cri.go:89] found id: ""
	I1209 05:52:30.638183 1437114 logs.go:282] 0 containers: []
	W1209 05:52:30.638191 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:52:30.638197 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:52:30.638280 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:52:30.664120 1437114 cri.go:89] found id: ""
	I1209 05:52:30.664206 1437114 logs.go:282] 0 containers: []
	W1209 05:52:30.664221 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:52:30.664229 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:52:30.664291 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:52:30.695098 1437114 cri.go:89] found id: ""
	I1209 05:52:30.695124 1437114 logs.go:282] 0 containers: []
	W1209 05:52:30.695132 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:52:30.695138 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:52:30.695196 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:52:30.728679 1437114 cri.go:89] found id: ""
	I1209 05:52:30.728703 1437114 logs.go:282] 0 containers: []
	W1209 05:52:30.728711 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:52:30.728718 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:52:30.728777 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:52:30.757085 1437114 cri.go:89] found id: ""
	I1209 05:52:30.757108 1437114 logs.go:282] 0 containers: []
	W1209 05:52:30.757116 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:52:30.757122 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:52:30.757190 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:52:30.781813 1437114 cri.go:89] found id: ""
	I1209 05:52:30.781838 1437114 logs.go:282] 0 containers: []
	W1209 05:52:30.781847 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:52:30.781853 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:52:30.781931 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:52:30.805893 1437114 cri.go:89] found id: ""
	I1209 05:52:30.805958 1437114 logs.go:282] 0 containers: []
	W1209 05:52:30.805972 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:52:30.805980 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:52:30.806045 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:52:30.838632 1437114 cri.go:89] found id: ""
	I1209 05:52:30.838657 1437114 logs.go:282] 0 containers: []
	W1209 05:52:30.838666 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:52:30.838675 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:52:30.838686 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:52:30.853978 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:52:30.854004 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:52:30.918110 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:52:30.910818    3197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:30.911400    3197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:30.912432    3197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:30.912927    3197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:30.914407    3197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:52:30.910818    3197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:30.911400    3197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:30.912432    3197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:30.912927    3197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:30.914407    3197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:52:30.918132 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:52:30.918144 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:52:30.943105 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:52:30.943142 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:52:30.969706 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:52:30.969735 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:52:33.525286 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:52:33.535730 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:52:33.535803 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:52:33.559344 1437114 cri.go:89] found id: ""
	I1209 05:52:33.559369 1437114 logs.go:282] 0 containers: []
	W1209 05:52:33.559378 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:52:33.559384 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:52:33.559441 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:52:33.588185 1437114 cri.go:89] found id: ""
	I1209 05:52:33.588254 1437114 logs.go:282] 0 containers: []
	W1209 05:52:33.588278 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:52:33.588292 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:52:33.588366 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:52:33.613255 1437114 cri.go:89] found id: ""
	I1209 05:52:33.613279 1437114 logs.go:282] 0 containers: []
	W1209 05:52:33.613288 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:52:33.613295 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:52:33.613382 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:52:33.636919 1437114 cri.go:89] found id: ""
	I1209 05:52:33.636953 1437114 logs.go:282] 0 containers: []
	W1209 05:52:33.636961 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:52:33.636968 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:52:33.637035 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:52:33.666309 1437114 cri.go:89] found id: ""
	I1209 05:52:33.666342 1437114 logs.go:282] 0 containers: []
	W1209 05:52:33.666351 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:52:33.666358 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:52:33.666424 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:52:33.698208 1437114 cri.go:89] found id: ""
	I1209 05:52:33.698283 1437114 logs.go:282] 0 containers: []
	W1209 05:52:33.698305 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:52:33.698324 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:52:33.698413 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:52:33.730383 1437114 cri.go:89] found id: ""
	I1209 05:52:33.730456 1437114 logs.go:282] 0 containers: []
	W1209 05:52:33.730479 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:52:33.730499 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:52:33.730585 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:52:33.759854 1437114 cri.go:89] found id: ""
	I1209 05:52:33.759930 1437114 logs.go:282] 0 containers: []
	W1209 05:52:33.759952 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:52:33.759972 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:52:33.760007 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:52:33.822572 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:52:33.815081    3306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:33.815468    3306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:33.816948    3306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:33.817250    3306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:33.818729    3306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:52:33.815081    3306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:33.815468    3306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:33.816948    3306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:33.817250    3306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:33.818729    3306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:52:33.822593 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:52:33.822606 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:52:33.848713 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:52:33.848751 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:52:33.875169 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:52:33.875202 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:52:33.929863 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:52:33.929899 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:52:36.446655 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:52:36.457494 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:52:36.457564 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:52:36.489953 1437114 cri.go:89] found id: ""
	I1209 05:52:36.490015 1437114 logs.go:282] 0 containers: []
	W1209 05:52:36.490045 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:52:36.490069 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:52:36.490171 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:52:36.518208 1437114 cri.go:89] found id: ""
	I1209 05:52:36.518232 1437114 logs.go:282] 0 containers: []
	W1209 05:52:36.518240 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:52:36.518246 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:52:36.518303 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:52:36.546757 1437114 cri.go:89] found id: ""
	I1209 05:52:36.546830 1437114 logs.go:282] 0 containers: []
	W1209 05:52:36.546852 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:52:36.546870 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:52:36.546958 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:52:36.573478 1437114 cri.go:89] found id: ""
	I1209 05:52:36.573504 1437114 logs.go:282] 0 containers: []
	W1209 05:52:36.573512 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:52:36.573518 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:52:36.573573 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:52:36.597359 1437114 cri.go:89] found id: ""
	I1209 05:52:36.597384 1437114 logs.go:282] 0 containers: []
	W1209 05:52:36.597392 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:52:36.597399 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:52:36.597456 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:52:36.626723 1437114 cri.go:89] found id: ""
	I1209 05:52:36.626750 1437114 logs.go:282] 0 containers: []
	W1209 05:52:36.626758 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:52:36.626765 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:52:36.626821 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:52:36.651878 1437114 cri.go:89] found id: ""
	I1209 05:52:36.651904 1437114 logs.go:282] 0 containers: []
	W1209 05:52:36.651913 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:52:36.651920 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:52:36.651983 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:52:36.677687 1437114 cri.go:89] found id: ""
	I1209 05:52:36.677763 1437114 logs.go:282] 0 containers: []
	W1209 05:52:36.677786 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:52:36.677806 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:52:36.677844 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:52:36.762388 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:52:36.754574    3419 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:36.755265    3419 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:36.756812    3419 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:36.757117    3419 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:36.758563    3419 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:52:36.754574    3419 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:36.755265    3419 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:36.756812    3419 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:36.757117    3419 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:36.758563    3419 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:52:36.762408 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:52:36.762421 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:52:36.787210 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:52:36.787245 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:52:36.813523 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:52:36.813549 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:52:36.871098 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:52:36.871134 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:52:39.145660 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1209 05:52:39.203856 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 05:52:39.203957 1437114 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
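Like the storageclass case above, enabling storage-provisioner fails only because OpenAPI validation cannot reach the apiserver; retrying on a fixed timer just re-hits the refused connection. Gating the apply on apiserver readiness avoids the wasted attempts; a sketch assuming curl in the node image and the apiserver's standard anonymous /readyz endpoint:

    # Wait until the apiserver answers readiness probes, then apply once:
    until curl -skf https://localhost:8443/readyz >/dev/null; do sleep 2; done
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml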
	I1209 05:52:39.388175 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:52:39.398492 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:52:39.398583 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:52:39.425881 1437114 cri.go:89] found id: ""
	I1209 05:52:39.425914 1437114 logs.go:282] 0 containers: []
	W1209 05:52:39.425924 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:52:39.425930 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:52:39.425998 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:52:39.450356 1437114 cri.go:89] found id: ""
	I1209 05:52:39.450390 1437114 logs.go:282] 0 containers: []
	W1209 05:52:39.450399 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:52:39.450405 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:52:39.450472 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:52:39.482441 1437114 cri.go:89] found id: ""
	I1209 05:52:39.482475 1437114 logs.go:282] 0 containers: []
	W1209 05:52:39.482483 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:52:39.482490 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:52:39.482554 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:52:39.512577 1437114 cri.go:89] found id: ""
	I1209 05:52:39.512602 1437114 logs.go:282] 0 containers: []
	W1209 05:52:39.512611 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:52:39.512617 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:52:39.512674 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:52:39.537514 1437114 cri.go:89] found id: ""
	I1209 05:52:39.537539 1437114 logs.go:282] 0 containers: []
	W1209 05:52:39.537547 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:52:39.537559 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:52:39.537620 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:52:39.561319 1437114 cri.go:89] found id: ""
	I1209 05:52:39.561352 1437114 logs.go:282] 0 containers: []
	W1209 05:52:39.561360 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:52:39.561366 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:52:39.561442 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:52:39.589300 1437114 cri.go:89] found id: ""
	I1209 05:52:39.589324 1437114 logs.go:282] 0 containers: []
	W1209 05:52:39.589333 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:52:39.589339 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:52:39.589398 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:52:39.620288 1437114 cri.go:89] found id: ""
	I1209 05:52:39.620312 1437114 logs.go:282] 0 containers: []
	W1209 05:52:39.620321 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
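The block above is minikube probing for each expected control-plane container by name and finding none, which is consistent with the apiserver being down: crictl reports no containers at all, for any component. A rough standalone reconstruction of that probe loop (assumed shape, not the actual cri.go implementation):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet",
            "kubernetes-dashboard",
        }
        for _, name := range components {
            // Mirrors the command in the log: list all containers whose
            // name matches, printing only their IDs.
            out, err := exec.Command("sudo", "crictl", "ps", "-a",
                "--quiet", "--name="+name).Output()
            ids := strings.Fields(string(out))
            if err != nil || len(ids) == 0 {
                fmt.Printf("no container found matching %q\n", name)
                continue
            }
            fmt.Printf("%s: %v\n", name, ids)
        }
    }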
	I1209 05:52:39.620339 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:52:39.620351 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:52:39.678215 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:52:39.678293 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:52:39.697337 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:52:39.697364 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:52:39.767115 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:52:39.758981    3543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:39.759384    3543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:39.761296    3543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:39.761699    3543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:39.763232    3543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:52:39.758981    3543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:39.759384    3543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:39.761296    3543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:39.761699    3543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:39.763232    3543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
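The five memcache.go errors come from kubectl's API discovery step: before `describe nodes` can run, the client GETs https://localhost:8443/api to build its server API group list, retries, and then gives up with the "connection to the server ... was refused" summary. The same discovery request can be issued directly; a hedged illustration follows (the insecure TLS setting is only for this connectivity check, since the probe does not load the cluster CA):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                // Reachability check only; do not verify the server cert.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get("https://localhost:8443/api?timeout=32s")
        if err != nil {
            fmt.Println("discovery failed:", err) // "connection refused" here
            return
        }
        defer resp.Body.Close()
        fmt.Println("discovery status:", resp.Status)
    }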
	I1209 05:52:39.767135 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:52:39.767147 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:52:39.791949 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:52:39.791985 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
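The "container status" line above encodes a shell fallback chain: prefer whatever `which crictl` resolves to, and fall back to `sudo docker ps -a` when crictl is absent or fails. The same logic in Go, as a sketch (the function name is an assumption, not minikube's):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // containerStatus mirrors the one-liner in the log: try crictl first,
    // then fall back to the docker CLI.
    func containerStatus() ([]byte, error) {
        if path, err := exec.LookPath("crictl"); err == nil {
            if out, err := exec.Command("sudo", path, "ps", "-a").Output(); err == nil {
                return out, nil
            }
        }
        // crictl missing or failed; try docker instead.
        return exec.Command("sudo", "docker", "ps", "-a").Output()
    }

    func main() {
        out, err := containerStatus()
        if err != nil {
            fmt.Println("neither crictl nor docker produced a listing:", err)
            return
        }
        fmt.Print(string(out))
    }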
	I1209 05:52:42.324195 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:52:42.339508 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:52:42.339591 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:52:42.370155 1437114 cri.go:89] found id: ""
	I1209 05:52:42.370181 1437114 logs.go:282] 0 containers: []
	W1209 05:52:42.370192 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:52:42.370199 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:52:42.370268 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:52:42.395020 1437114 cri.go:89] found id: ""
	I1209 05:52:42.395054 1437114 logs.go:282] 0 containers: []
	W1209 05:52:42.395063 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:52:42.395069 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:52:42.395136 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:52:42.423571 1437114 cri.go:89] found id: ""
	I1209 05:52:42.423604 1437114 logs.go:282] 0 containers: []
	W1209 05:52:42.423612 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:52:42.423618 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:52:42.423684 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:52:42.449744 1437114 cri.go:89] found id: ""
	I1209 05:52:42.449821 1437114 logs.go:282] 0 containers: []
	W1209 05:52:42.449846 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:52:42.449865 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:52:42.449951 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:52:42.476838 1437114 cri.go:89] found id: ""
	I1209 05:52:42.476864 1437114 logs.go:282] 0 containers: []
	W1209 05:52:42.476872 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:52:42.476879 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:52:42.476957 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:52:42.505251 1437114 cri.go:89] found id: ""
	I1209 05:52:42.505278 1437114 logs.go:282] 0 containers: []
	W1209 05:52:42.505287 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:52:42.505294 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:52:42.505372 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:52:42.529646 1437114 cri.go:89] found id: ""
	I1209 05:52:42.529712 1437114 logs.go:282] 0 containers: []
	W1209 05:52:42.529728 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:52:42.529741 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:52:42.529803 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:52:42.553792 1437114 cri.go:89] found id: ""
	I1209 05:52:42.553818 1437114 logs.go:282] 0 containers: []
	W1209 05:52:42.553827 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:52:42.553836 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:52:42.553865 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:52:42.610712 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:52:42.610750 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:52:42.626470 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:52:42.626498 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:52:42.691633 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:52:42.681192    3655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:42.683916    3655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:42.685453    3655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:42.685744    3655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:42.687188    3655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:52:42.681192    3655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:42.683916    3655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:42.685453    3655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:42.685744    3655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:42.687188    3655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:52:42.691658 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:52:42.691672 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:52:42.721023 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:52:42.721056 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:52:45.257072 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:52:45.279876 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:52:45.279970 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:52:45.310797 1437114 cri.go:89] found id: ""
	I1209 05:52:45.310822 1437114 logs.go:282] 0 containers: []
	W1209 05:52:45.310831 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:52:45.310837 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:52:45.310915 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:52:45.339967 1437114 cri.go:89] found id: ""
	I1209 05:52:45.339990 1437114 logs.go:282] 0 containers: []
	W1209 05:52:45.339999 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:52:45.340004 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:52:45.340083 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:52:45.368323 1437114 cri.go:89] found id: ""
	I1209 05:52:45.368351 1437114 logs.go:282] 0 containers: []
	W1209 05:52:45.368360 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:52:45.368368 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:52:45.368427 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:52:45.393892 1437114 cri.go:89] found id: ""
	I1209 05:52:45.393918 1437114 logs.go:282] 0 containers: []
	W1209 05:52:45.393926 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:52:45.393932 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:52:45.393995 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:52:45.418992 1437114 cri.go:89] found id: ""
	I1209 05:52:45.419025 1437114 logs.go:282] 0 containers: []
	W1209 05:52:45.419035 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:52:45.419041 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:52:45.419107 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:52:45.461356 1437114 cri.go:89] found id: ""
	I1209 05:52:45.461392 1437114 logs.go:282] 0 containers: []
	W1209 05:52:45.461401 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:52:45.461407 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:52:45.461481 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:52:45.493718 1437114 cri.go:89] found id: ""
	I1209 05:52:45.493753 1437114 logs.go:282] 0 containers: []
	W1209 05:52:45.493762 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:52:45.493768 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:52:45.493836 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:52:45.517850 1437114 cri.go:89] found id: ""
	I1209 05:52:45.517876 1437114 logs.go:282] 0 containers: []
	W1209 05:52:45.517898 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:52:45.517907 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:52:45.517922 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:52:45.576699 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:52:45.576736 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:52:45.592339 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:52:45.592368 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:52:45.660368 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:52:45.651938    3766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:45.652711    3766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:45.654414    3766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:45.654934    3766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:45.656559    3766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:52:45.651938    3766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:45.652711    3766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:45.654414    3766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:45.654934    3766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:45.656559    3766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:52:45.660391 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:52:45.660404 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:52:45.687142 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:52:45.687222 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:52:48.227261 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:52:48.237593 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:52:48.237680 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:52:48.260468 1437114 cri.go:89] found id: ""
	I1209 05:52:48.260493 1437114 logs.go:282] 0 containers: []
	W1209 05:52:48.260502 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:52:48.260509 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:52:48.260570 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:52:48.289034 1437114 cri.go:89] found id: ""
	I1209 05:52:48.289059 1437114 logs.go:282] 0 containers: []
	W1209 05:52:48.289068 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:52:48.289074 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:52:48.289150 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:52:48.316323 1437114 cri.go:89] found id: ""
	I1209 05:52:48.316349 1437114 logs.go:282] 0 containers: []
	W1209 05:52:48.316358 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:52:48.316364 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:52:48.316434 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:52:48.342218 1437114 cri.go:89] found id: ""
	I1209 05:52:48.342240 1437114 logs.go:282] 0 containers: []
	W1209 05:52:48.342249 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:52:48.342255 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:52:48.342308 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:52:48.371363 1437114 cri.go:89] found id: ""
	I1209 05:52:48.371390 1437114 logs.go:282] 0 containers: []
	W1209 05:52:48.371399 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:52:48.371406 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:52:48.371466 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:52:48.395178 1437114 cri.go:89] found id: ""
	I1209 05:52:48.395204 1437114 logs.go:282] 0 containers: []
	W1209 05:52:48.395212 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:52:48.395218 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:52:48.395274 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:52:48.419670 1437114 cri.go:89] found id: ""
	I1209 05:52:48.419709 1437114 logs.go:282] 0 containers: []
	W1209 05:52:48.419718 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:52:48.419740 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:52:48.419825 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:52:48.461924 1437114 cri.go:89] found id: ""
	I1209 05:52:48.461946 1437114 logs.go:282] 0 containers: []
	W1209 05:52:48.461954 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:52:48.461963 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:52:48.461974 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:52:48.528889 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:52:48.528926 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:52:48.544946 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:52:48.544976 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:52:48.610447 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:52:48.602428    3879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:48.603193    3879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:48.604673    3879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:48.605169    3879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:48.606641    3879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:52:48.602428    3879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:48.603193    3879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:48.604673    3879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:48.605169    3879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:48.606641    3879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:52:48.610466 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:52:48.610478 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:52:48.636193 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:52:48.636232 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:52:49.474531 1437114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1209 05:52:49.539382 1437114 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 05:52:49.539481 1437114 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1209 05:52:49.543501 1437114 out.go:179] * Enabled addons: 
	I1209 05:52:49.546285 1437114 addons.go:530] duration metric: took 1m54.169473068s for enable addons: enabled=[]
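Every addon apply in this run hit the same refused connection, so the enable step finishes after 1m54s with an empty addon list (enabled=[]). The earlier "apply failed, will retry" warnings come from a retry loop around kubectl apply; a hedged sketch of that pattern follows (the interval and deadline are illustrative, not minikube's actual values):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // applyWithRetry re-runs `kubectl apply` until it succeeds or the
    // deadline passes, the way the addon callbacks keep retrying above.
    func applyWithRetry(manifest string, deadline time.Duration) error {
        stop := time.Now().Add(deadline)
        for {
            out, err := exec.Command("sudo",
                "KUBECONFIG=/var/lib/minikube/kubeconfig",
                "/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
                "apply", "--force", "-f", manifest).CombinedOutput()
            if err == nil {
                return nil
            }
            if time.Now().After(stop) {
                return fmt.Errorf("giving up on %s: %v\n%s", manifest, err, out)
            }
            time.Sleep(5 * time.Second) // illustrative retry interval
        }
    }

    func main() {
        err := applyWithRetry("/etc/kubernetes/addons/storage-provisioner.yaml", 2*time.Minute)
        if err != nil {
            fmt.Println(err)
        }
    }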
	I1209 05:52:51.163525 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:52:51.174339 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:52:51.174465 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:52:51.198800 1437114 cri.go:89] found id: ""
	I1209 05:52:51.198828 1437114 logs.go:282] 0 containers: []
	W1209 05:52:51.198837 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:52:51.198843 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:52:51.198901 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:52:51.224524 1437114 cri.go:89] found id: ""
	I1209 05:52:51.224552 1437114 logs.go:282] 0 containers: []
	W1209 05:52:51.224561 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:52:51.224568 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:52:51.224626 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:52:51.249032 1437114 cri.go:89] found id: ""
	I1209 05:52:51.249099 1437114 logs.go:282] 0 containers: []
	W1209 05:52:51.249122 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:52:51.249136 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:52:51.249210 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:52:51.272901 1437114 cri.go:89] found id: ""
	I1209 05:52:51.272929 1437114 logs.go:282] 0 containers: []
	W1209 05:52:51.272937 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:52:51.272950 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:52:51.273011 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:52:51.296909 1437114 cri.go:89] found id: ""
	I1209 05:52:51.296935 1437114 logs.go:282] 0 containers: []
	W1209 05:52:51.296943 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:52:51.296949 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:52:51.297007 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:52:51.325419 1437114 cri.go:89] found id: ""
	I1209 05:52:51.325499 1437114 logs.go:282] 0 containers: []
	W1209 05:52:51.325522 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:52:51.325537 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:52:51.325609 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:52:51.350449 1437114 cri.go:89] found id: ""
	I1209 05:52:51.350475 1437114 logs.go:282] 0 containers: []
	W1209 05:52:51.350484 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:52:51.350490 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:52:51.350571 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:52:51.378459 1437114 cri.go:89] found id: ""
	I1209 05:52:51.378482 1437114 logs.go:282] 0 containers: []
	W1209 05:52:51.378490 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:52:51.378501 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:52:51.378512 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:52:51.439032 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:52:51.439075 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:52:51.457325 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:52:51.457355 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:52:51.525486 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:52:51.517693    3998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:51.518243    3998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:51.519766    3998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:51.520306    3998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:51.521762    3998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:52:51.517693    3998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:51.518243    3998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:51.519766    3998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:51.520306    3998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:51.521762    3998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:52:51.525549 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:52:51.525570 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:52:51.551425 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:52:51.551463 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:52:54.078624 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:52:54.089324 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:52:54.089395 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:52:54.117819 1437114 cri.go:89] found id: ""
	I1209 05:52:54.117840 1437114 logs.go:282] 0 containers: []
	W1209 05:52:54.117856 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:52:54.117863 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:52:54.117923 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:52:54.143006 1437114 cri.go:89] found id: ""
	I1209 05:52:54.143083 1437114 logs.go:282] 0 containers: []
	W1209 05:52:54.143105 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:52:54.143125 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:52:54.143200 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:52:54.168655 1437114 cri.go:89] found id: ""
	I1209 05:52:54.168715 1437114 logs.go:282] 0 containers: []
	W1209 05:52:54.168742 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:52:54.168758 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:52:54.168847 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:52:54.193433 1437114 cri.go:89] found id: ""
	I1209 05:52:54.193459 1437114 logs.go:282] 0 containers: []
	W1209 05:52:54.193467 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:52:54.193474 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:52:54.193558 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:52:54.216587 1437114 cri.go:89] found id: ""
	I1209 05:52:54.216663 1437114 logs.go:282] 0 containers: []
	W1209 05:52:54.216686 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:52:54.216700 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:52:54.216775 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:52:54.240686 1437114 cri.go:89] found id: ""
	I1209 05:52:54.240723 1437114 logs.go:282] 0 containers: []
	W1209 05:52:54.240732 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:52:54.240739 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:52:54.240830 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:52:54.264680 1437114 cri.go:89] found id: ""
	I1209 05:52:54.264710 1437114 logs.go:282] 0 containers: []
	W1209 05:52:54.264719 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:52:54.264725 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:52:54.264785 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:52:54.288715 1437114 cri.go:89] found id: ""
	I1209 05:52:54.288739 1437114 logs.go:282] 0 containers: []
	W1209 05:52:54.288748 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:52:54.288757 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:52:54.288769 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:52:54.344591 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:52:54.344629 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:52:54.360275 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:52:54.360350 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:52:54.422057 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:52:54.413842    4106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:54.414541    4106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:54.416178    4106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:54.416655    4106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:54.418204    4106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:52:54.413842    4106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:54.414541    4106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:54.416178    4106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:54.416655    4106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:54.418204    4106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:52:54.422081 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:52:54.422093 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:52:54.451978 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:52:54.452157 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:52:56.987228 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:52:56.997370 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:52:56.997440 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:52:57.026856 1437114 cri.go:89] found id: ""
	I1209 05:52:57.026878 1437114 logs.go:282] 0 containers: []
	W1209 05:52:57.026886 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:52:57.026893 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:52:57.026955 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:52:57.052417 1437114 cri.go:89] found id: ""
	I1209 05:52:57.052442 1437114 logs.go:282] 0 containers: []
	W1209 05:52:57.052450 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:52:57.052457 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:52:57.052517 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:52:57.079492 1437114 cri.go:89] found id: ""
	I1209 05:52:57.079516 1437114 logs.go:282] 0 containers: []
	W1209 05:52:57.079526 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:52:57.079532 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:52:57.079590 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:52:57.103111 1437114 cri.go:89] found id: ""
	I1209 05:52:57.103135 1437114 logs.go:282] 0 containers: []
	W1209 05:52:57.103144 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:52:57.103150 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:52:57.103212 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:52:57.129591 1437114 cri.go:89] found id: ""
	I1209 05:52:57.129616 1437114 logs.go:282] 0 containers: []
	W1209 05:52:57.129624 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:52:57.129631 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:52:57.129706 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:52:57.153092 1437114 cri.go:89] found id: ""
	I1209 05:52:57.153115 1437114 logs.go:282] 0 containers: []
	W1209 05:52:57.153124 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:52:57.153131 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:52:57.153189 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:52:57.177623 1437114 cri.go:89] found id: ""
	I1209 05:52:57.177647 1437114 logs.go:282] 0 containers: []
	W1209 05:52:57.177656 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:52:57.177662 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:52:57.177748 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:52:57.202469 1437114 cri.go:89] found id: ""
	I1209 05:52:57.202493 1437114 logs.go:282] 0 containers: []
	W1209 05:52:57.202502 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:52:57.202511 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:52:57.202550 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:52:57.260356 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:52:57.260393 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:52:57.276459 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:52:57.276539 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:52:57.343015 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:52:57.335090    4217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:57.335845    4217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:57.337423    4217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:57.337717    4217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:57.339202    4217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:52:57.335090    4217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:57.335845    4217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:57.337423    4217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:57.337717    4217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:52:57.339202    4217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
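
This repeated stderr block is the crux of the failure: every kubectl API-discovery request to https://localhost:8443 is refused, meaning nothing is listening on the apiserver port at all (the TLS handshake never starts). A minimal, illustrative Go probe for the same condition follows; it is not part of minikube, and the 2-second timeout is an assumption:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Dial the same port kubectl is being refused on in the log above.
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		// On this node the expected result is "connect: connection refused".
		fmt.Println("apiserver port unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on localhost:8443")
}
```

A refused connection (as opposed to a timeout) rules out firewall or routing problems and points at the kube-apiserver container simply not running, which matches the empty crictl listings below.
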
	I1209 05:52:57.343037 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:52:57.343052 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:52:57.368448 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:52:57.368485 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
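
For context, each polling cycle in this log probes the same set of component names with `sudo crictl ps -a --quiet --name=<component>` and treats empty output as "no container found". The sketch below is a self-contained Go approximation of that probe, illustrative only; the real logic lives in minikube's cri.go:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// The component names minikube probes in every cycle above.
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
		"kubernetes-dashboard",
	}
	for _, name := range components {
		// -a includes exited containers; --quiet prints only container IDs.
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		ids := strings.Fields(string(out))
		if err != nil || len(ids) == 0 {
			fmt.Printf("no container was found matching %q\n", name)
			continue
		}
		fmt.Printf("%s: %v\n", name, ids)
	}
}
```

In this run every probe returns an empty ID list, so none of the control-plane containers were ever created.
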
	I1209 05:52:59.899132 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:52:59.909390 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:52:59.909502 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:52:59.942228 1437114 cri.go:89] found id: ""
	I1209 05:52:59.942299 1437114 logs.go:282] 0 containers: []
	W1209 05:52:59.942333 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:52:59.942354 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:52:59.942464 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:52:59.967993 1437114 cri.go:89] found id: ""
	I1209 05:52:59.968090 1437114 logs.go:282] 0 containers: []
	W1209 05:52:59.968105 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:52:59.968112 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:52:59.968183 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:53:00.004409 1437114 cri.go:89] found id: ""
	I1209 05:53:00.004444 1437114 logs.go:282] 0 containers: []
	W1209 05:53:00.004453 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:53:00.004461 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:53:00.004542 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:53:00.122181 1437114 cri.go:89] found id: ""
	I1209 05:53:00.122206 1437114 logs.go:282] 0 containers: []
	W1209 05:53:00.122216 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:53:00.122238 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:53:00.122319 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:53:00.178386 1437114 cri.go:89] found id: ""
	I1209 05:53:00.178469 1437114 logs.go:282] 0 containers: []
	W1209 05:53:00.178481 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:53:00.178488 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:53:00.178720 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:53:00.226314 1437114 cri.go:89] found id: ""
	I1209 05:53:00.226451 1437114 logs.go:282] 0 containers: []
	W1209 05:53:00.226477 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:53:00.226486 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:53:00.226568 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:53:00.271734 1437114 cri.go:89] found id: ""
	I1209 05:53:00.271771 1437114 logs.go:282] 0 containers: []
	W1209 05:53:00.271782 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:53:00.271790 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:53:00.271932 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:53:00.335362 1437114 cri.go:89] found id: ""
	I1209 05:53:00.335448 1437114 logs.go:282] 0 containers: []
	W1209 05:53:00.335466 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:53:00.335477 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:53:00.335493 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:53:00.365642 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:53:00.365684 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:53:00.400318 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:53:00.400349 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:53:00.462709 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:53:00.462752 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:53:00.480156 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:53:00.480188 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:53:00.548948 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:53:00.540982    4347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:00.541655    4347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:00.543286    4347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:00.543662    4347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:00.545115    4347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:53:00.540982    4347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:00.541655    4347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:00.543286    4347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:00.543662    4347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:00.545115    4347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
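
Each cycle opens with `sudo pgrep -xnf kube-apiserver.*minikube.*` to check whether an apiserver process exists before falling back to log gathering, and the timestamps show the whole cycle repeating roughly every three seconds. A generic Go sketch of that wait loop is below; only the ~3 s cadence is taken from the log, while the 2-minute deadline is an assumption for illustration:

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// apiserverRunning mirrors the probe in the log: pgrep exits 0 only when a
// process matches (-f matches the full command line, -x exactly, -n newest).
func apiserverRunning() bool {
	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
}

func main() {
	deadline := time.Now().Add(2 * time.Minute) // assumed deadline, not from the log
	for time.Now().Before(deadline) {
		if apiserverRunning() {
			fmt.Println("kube-apiserver is up")
			return
		}
		time.Sleep(3 * time.Second) // matches the cadence seen in the timestamps
	}
	fmt.Println("timed out waiting for kube-apiserver")
}
```
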
	[... the same diagnostic cycle repeated every ~3 s at 05:53:03, 05:53:06, 05:53:09, 05:53:12, 05:53:15 and 05:53:18 (process 1437114): pgrep found no kube-apiserver, crictl found 0 containers for every probed component, and "kubectl describe nodes" failed with the identical connection-refused errors against localhost:8443; the duplicate cycles are elided here ...]
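
The "container status" step visible in the cycles above uses a shell fallback, ``sudo `which crictl || echo crictl` ps -a || sudo docker ps -a``: prefer crictl, and only if it is missing or fails, ask the docker CLI instead. A rough Go equivalent, purely illustrative:

```go
package main

import (
	"fmt"
	"os/exec"
)

// containerStatus mimics the fallback one-liner from the log: try crictl
// first and fall back to the docker CLI if crictl is absent or errors out.
func containerStatus() ([]byte, error) {
	if out, err := exec.Command("sudo", "crictl", "ps", "-a").Output(); err == nil {
		return out, nil
	}
	return exec.Command("sudo", "docker", "ps", "-a").Output()
}

func main() {
	out, err := containerStatus()
	if err != nil {
		fmt.Println("no container runtime CLI answered:", err)
		return
	}
	fmt.Print(string(out))
}
```
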
	I1209 05:53:20.625561 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:53:20.635963 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:53:20.636053 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:53:20.659961 1437114 cri.go:89] found id: ""
	I1209 05:53:20.659984 1437114 logs.go:282] 0 containers: []
	W1209 05:53:20.659994 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:53:20.660000 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:53:20.660075 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:53:20.690085 1437114 cri.go:89] found id: ""
	I1209 05:53:20.690119 1437114 logs.go:282] 0 containers: []
	W1209 05:53:20.690128 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:53:20.690134 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:53:20.690199 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:53:20.722202 1437114 cri.go:89] found id: ""
	I1209 05:53:20.722238 1437114 logs.go:282] 0 containers: []
	W1209 05:53:20.722247 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:53:20.722254 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:53:20.722319 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:53:20.754033 1437114 cri.go:89] found id: ""
	I1209 05:53:20.754057 1437114 logs.go:282] 0 containers: []
	W1209 05:53:20.754066 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:53:20.754073 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:53:20.754157 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:53:20.778306 1437114 cri.go:89] found id: ""
	I1209 05:53:20.778332 1437114 logs.go:282] 0 containers: []
	W1209 05:53:20.778341 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:53:20.778349 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:53:20.778427 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:53:20.802477 1437114 cri.go:89] found id: ""
	I1209 05:53:20.802501 1437114 logs.go:282] 0 containers: []
	W1209 05:53:20.802510 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:53:20.802516 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:53:20.802605 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:53:20.833205 1437114 cri.go:89] found id: ""
	I1209 05:53:20.833231 1437114 logs.go:282] 0 containers: []
	W1209 05:53:20.833239 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:53:20.833246 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:53:20.833310 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:53:20.858107 1437114 cri.go:89] found id: ""
	I1209 05:53:20.858172 1437114 logs.go:282] 0 containers: []
	W1209 05:53:20.858188 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:53:20.858198 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:53:20.858209 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:53:20.914050 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:53:20.914088 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:53:20.930297 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:53:20.930326 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:53:21.009735 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:53:20.998811    5112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:20.999637    5112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:21.001322    5112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:21.001871    5112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:21.003770    5112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:53:21.009759 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:53:21.009772 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:53:21.035653 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:53:21.035687 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
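	For reference, the probe sequence that repeats above can be reproduced by hand from inside the node (a minimal sketch; it reuses the exact commands the log shows and assumes a shell opened on the node, e.g. via "minikube ssh"):
	
	    # Is a kube-apiserver process running for this profile?
	    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	
	    # Has containerd created a kube-apiserver container at all?
	    sudo crictl ps -a --quiet --name=kube-apiserver
	
	    # The same describe-nodes call that keeps failing with "connection refused":
	    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
	        --kubeconfig=/var/lib/minikube/kubeconfig
	
	An empty result from the first two commands matches the found id: "" / "0 containers" lines above, which is why every kubectl call against localhost:8443 is refused.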
	I1209 05:53:23.563248 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:53:23.574010 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:53:23.574087 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:53:23.603557 1437114 cri.go:89] found id: ""
	I1209 05:53:23.603583 1437114 logs.go:282] 0 containers: []
	W1209 05:53:23.603593 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:53:23.603599 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:53:23.603658 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:53:23.629927 1437114 cri.go:89] found id: ""
	I1209 05:53:23.629953 1437114 logs.go:282] 0 containers: []
	W1209 05:53:23.629961 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:53:23.629967 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:53:23.630029 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:53:23.654017 1437114 cri.go:89] found id: ""
	I1209 05:53:23.654042 1437114 logs.go:282] 0 containers: []
	W1209 05:53:23.654050 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:53:23.654057 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:53:23.654114 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:53:23.681104 1437114 cri.go:89] found id: ""
	I1209 05:53:23.681126 1437114 logs.go:282] 0 containers: []
	W1209 05:53:23.681134 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:53:23.681140 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:53:23.681210 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:53:23.717733 1437114 cri.go:89] found id: ""
	I1209 05:53:23.717754 1437114 logs.go:282] 0 containers: []
	W1209 05:53:23.717763 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:53:23.717769 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:53:23.717826 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:53:23.746697 1437114 cri.go:89] found id: ""
	I1209 05:53:23.746718 1437114 logs.go:282] 0 containers: []
	W1209 05:53:23.746727 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:53:23.746734 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:53:23.746791 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:53:23.771013 1437114 cri.go:89] found id: ""
	I1209 05:53:23.771035 1437114 logs.go:282] 0 containers: []
	W1209 05:53:23.771043 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:53:23.771049 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:53:23.771110 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:53:23.797671 1437114 cri.go:89] found id: ""
	I1209 05:53:23.797695 1437114 logs.go:282] 0 containers: []
	W1209 05:53:23.797705 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:53:23.797714 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:53:23.797727 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:53:23.863004 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:53:23.854866    5216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:23.855647    5216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:23.857241    5216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:23.857752    5216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:23.859306    5216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:53:23.863025 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:53:23.863039 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:53:23.888849 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:53:23.888886 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:53:23.918103 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:53:23.918129 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:53:23.981103 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:53:23.981139 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:53:26.502565 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:53:26.513114 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:53:26.513204 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:53:26.536286 1437114 cri.go:89] found id: ""
	I1209 05:53:26.536352 1437114 logs.go:282] 0 containers: []
	W1209 05:53:26.536366 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:53:26.536373 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:53:26.536448 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:53:26.567137 1437114 cri.go:89] found id: ""
	I1209 05:53:26.567165 1437114 logs.go:282] 0 containers: []
	W1209 05:53:26.567174 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:53:26.567181 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:53:26.567255 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:53:26.593992 1437114 cri.go:89] found id: ""
	I1209 05:53:26.594018 1437114 logs.go:282] 0 containers: []
	W1209 05:53:26.594027 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:53:26.594033 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:53:26.594112 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:53:26.622318 1437114 cri.go:89] found id: ""
	I1209 05:53:26.622341 1437114 logs.go:282] 0 containers: []
	W1209 05:53:26.622349 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:53:26.622356 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:53:26.622436 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:53:26.647615 1437114 cri.go:89] found id: ""
	I1209 05:53:26.647689 1437114 logs.go:282] 0 containers: []
	W1209 05:53:26.647724 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:53:26.647744 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:53:26.647837 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:53:26.672100 1437114 cri.go:89] found id: ""
	I1209 05:53:26.672174 1437114 logs.go:282] 0 containers: []
	W1209 05:53:26.672189 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:53:26.672197 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:53:26.672268 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:53:26.702289 1437114 cri.go:89] found id: ""
	I1209 05:53:26.702322 1437114 logs.go:282] 0 containers: []
	W1209 05:53:26.702331 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:53:26.702355 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:53:26.702438 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:53:26.732737 1437114 cri.go:89] found id: ""
	I1209 05:53:26.732807 1437114 logs.go:282] 0 containers: []
	W1209 05:53:26.732831 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:53:26.732855 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:53:26.732894 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:53:26.749702 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:53:26.749778 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:53:26.813476 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:53:26.805499    5331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:26.805968    5331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:26.807499    5331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:26.807884    5331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:26.809521    5331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:53:26.813510 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:53:26.813524 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:53:26.839545 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:53:26.839583 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:53:26.866441 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:53:26.866469 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:53:29.424166 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:53:29.435921 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:53:29.435993 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:53:29.462038 1437114 cri.go:89] found id: ""
	I1209 05:53:29.462060 1437114 logs.go:282] 0 containers: []
	W1209 05:53:29.462068 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:53:29.462074 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:53:29.462134 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:53:29.485671 1437114 cri.go:89] found id: ""
	I1209 05:53:29.485695 1437114 logs.go:282] 0 containers: []
	W1209 05:53:29.485704 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:53:29.485710 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:53:29.485765 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:53:29.508799 1437114 cri.go:89] found id: ""
	I1209 05:53:29.508829 1437114 logs.go:282] 0 containers: []
	W1209 05:53:29.508838 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:53:29.508844 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:53:29.508910 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:53:29.533027 1437114 cri.go:89] found id: ""
	I1209 05:53:29.533052 1437114 logs.go:282] 0 containers: []
	W1209 05:53:29.533060 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:53:29.533066 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:53:29.533151 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:53:29.565784 1437114 cri.go:89] found id: ""
	I1209 05:53:29.565811 1437114 logs.go:282] 0 containers: []
	W1209 05:53:29.565819 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:53:29.565825 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:53:29.565882 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:53:29.590917 1437114 cri.go:89] found id: ""
	I1209 05:53:29.590943 1437114 logs.go:282] 0 containers: []
	W1209 05:53:29.590951 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:53:29.590957 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:53:29.591014 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:53:29.618282 1437114 cri.go:89] found id: ""
	I1209 05:53:29.618307 1437114 logs.go:282] 0 containers: []
	W1209 05:53:29.618316 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:53:29.618322 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:53:29.618381 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:53:29.646902 1437114 cri.go:89] found id: ""
	I1209 05:53:29.646936 1437114 logs.go:282] 0 containers: []
	W1209 05:53:29.646946 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:53:29.646955 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:53:29.646973 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:53:29.707743 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:53:29.707828 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:53:29.724421 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:53:29.724499 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:53:29.794074 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:53:29.785906    5445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:29.786405    5445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:29.787873    5445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:29.788573    5445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:29.790227    5445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:53:29.794139 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:53:29.794180 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:53:29.820222 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:53:29.820259 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:53:32.350724 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:53:32.361228 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:53:32.361300 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:53:32.389541 1437114 cri.go:89] found id: ""
	I1209 05:53:32.389564 1437114 logs.go:282] 0 containers: []
	W1209 05:53:32.389572 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:53:32.389578 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:53:32.389637 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:53:32.412985 1437114 cri.go:89] found id: ""
	I1209 05:53:32.413008 1437114 logs.go:282] 0 containers: []
	W1209 05:53:32.413017 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:53:32.413023 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:53:32.413100 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:53:32.436603 1437114 cri.go:89] found id: ""
	I1209 05:53:32.436628 1437114 logs.go:282] 0 containers: []
	W1209 05:53:32.436637 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:53:32.436644 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:53:32.436703 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:53:32.461975 1437114 cri.go:89] found id: ""
	I1209 05:53:32.462039 1437114 logs.go:282] 0 containers: []
	W1209 05:53:32.462053 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:53:32.462060 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:53:32.462122 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:53:32.485536 1437114 cri.go:89] found id: ""
	I1209 05:53:32.485560 1437114 logs.go:282] 0 containers: []
	W1209 05:53:32.485568 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:53:32.485574 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:53:32.485633 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:53:32.509130 1437114 cri.go:89] found id: ""
	I1209 05:53:32.509159 1437114 logs.go:282] 0 containers: []
	W1209 05:53:32.509168 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:53:32.509175 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:53:32.509253 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:53:32.532336 1437114 cri.go:89] found id: ""
	I1209 05:53:32.532366 1437114 logs.go:282] 0 containers: []
	W1209 05:53:32.532374 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:53:32.532381 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:53:32.532465 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:53:32.556282 1437114 cri.go:89] found id: ""
	I1209 05:53:32.556319 1437114 logs.go:282] 0 containers: []
	W1209 05:53:32.556329 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:53:32.556338 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:53:32.556352 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:53:32.572109 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:53:32.572183 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:53:32.633108 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:53:32.624780    5553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:32.625448    5553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:32.627074    5553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:32.627615    5553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:32.629220    5553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:53:32.633141 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:53:32.633155 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:53:32.662184 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:53:32.662225 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:53:32.702034 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:53:32.702063 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:53:35.266899 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:53:35.277229 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:53:35.277296 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:53:35.300790 1437114 cri.go:89] found id: ""
	I1209 05:53:35.300814 1437114 logs.go:282] 0 containers: []
	W1209 05:53:35.300823 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:53:35.300830 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:53:35.300892 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:53:35.325182 1437114 cri.go:89] found id: ""
	I1209 05:53:35.325204 1437114 logs.go:282] 0 containers: []
	W1209 05:53:35.325212 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:53:35.325218 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:53:35.325280 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:53:35.353701 1437114 cri.go:89] found id: ""
	I1209 05:53:35.353727 1437114 logs.go:282] 0 containers: []
	W1209 05:53:35.353735 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:53:35.353741 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:53:35.353802 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:53:35.377248 1437114 cri.go:89] found id: ""
	I1209 05:53:35.377272 1437114 logs.go:282] 0 containers: []
	W1209 05:53:35.377281 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:53:35.377288 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:53:35.377347 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:53:35.401542 1437114 cri.go:89] found id: ""
	I1209 05:53:35.401568 1437114 logs.go:282] 0 containers: []
	W1209 05:53:35.401577 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:53:35.401584 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:53:35.401663 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:53:35.426460 1437114 cri.go:89] found id: ""
	I1209 05:53:35.426488 1437114 logs.go:282] 0 containers: []
	W1209 05:53:35.426497 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:53:35.426503 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:53:35.426561 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:53:35.454120 1437114 cri.go:89] found id: ""
	I1209 05:53:35.454145 1437114 logs.go:282] 0 containers: []
	W1209 05:53:35.454154 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:53:35.454160 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:53:35.454217 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:53:35.478639 1437114 cri.go:89] found id: ""
	I1209 05:53:35.478664 1437114 logs.go:282] 0 containers: []
	W1209 05:53:35.478673 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:53:35.478681 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:53:35.478692 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:53:35.504448 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:53:35.504487 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:53:35.533724 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:53:35.533751 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:53:35.589526 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:53:35.589560 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:53:35.605319 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:53:35.605345 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:53:35.676318 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:53:35.668651    5679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:35.669162    5679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:35.670613    5679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:35.671063    5679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:35.672483    5679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
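	Every one of the kubectl failures above is the same symptom: nothing is listening on the apiserver port. A quick manual confirmation (a sketch; "curl" and "ss" are assumed to be available in the node image and are not commands taken from this log):
	
	    # Probe the endpoint kubectl keeps failing against (curl exit 7 = connection refused):
	    curl -sk --max-time 5 https://localhost:8443/api ; echo "exit=$?"
	
	    # Check whether anything is bound to port 8443 at all:
	    sudo ss -ltnp | grep -w 8443 || echo "no listener on 8443"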
	I1209 05:53:38.177618 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:53:38.191936 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:53:38.192007 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:53:38.225078 1437114 cri.go:89] found id: ""
	I1209 05:53:38.225117 1437114 logs.go:282] 0 containers: []
	W1209 05:53:38.225126 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:53:38.225133 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:53:38.225204 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:53:38.257246 1437114 cri.go:89] found id: ""
	I1209 05:53:38.257272 1437114 logs.go:282] 0 containers: []
	W1209 05:53:38.257281 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:53:38.257286 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:53:38.257350 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:53:38.286060 1437114 cri.go:89] found id: ""
	I1209 05:53:38.286083 1437114 logs.go:282] 0 containers: []
	W1209 05:53:38.286091 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:53:38.286097 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:53:38.286158 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:53:38.315924 1437114 cri.go:89] found id: ""
	I1209 05:53:38.315989 1437114 logs.go:282] 0 containers: []
	W1209 05:53:38.316050 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:53:38.316081 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:53:38.316148 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:53:38.340319 1437114 cri.go:89] found id: ""
	I1209 05:53:38.340348 1437114 logs.go:282] 0 containers: []
	W1209 05:53:38.340357 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:53:38.340363 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:53:38.340424 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:53:38.365184 1437114 cri.go:89] found id: ""
	I1209 05:53:38.365220 1437114 logs.go:282] 0 containers: []
	W1209 05:53:38.365229 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:53:38.365235 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:53:38.365307 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:53:38.389641 1437114 cri.go:89] found id: ""
	I1209 05:53:38.389720 1437114 logs.go:282] 0 containers: []
	W1209 05:53:38.389744 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:53:38.389759 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:53:38.389832 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:53:38.420280 1437114 cri.go:89] found id: ""
	I1209 05:53:38.420306 1437114 logs.go:282] 0 containers: []
	W1209 05:53:38.420315 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:53:38.420324 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:53:38.420353 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:53:38.476252 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:53:38.476288 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:53:38.492393 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:53:38.492472 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:53:38.557826 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:53:38.549594    5780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:38.550283    5780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:38.551905    5780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:38.552451    5780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:38.553989    5780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:53:38.557849 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:53:38.557862 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:53:38.583171 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:53:38.583206 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:53:41.110406 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:53:41.120474 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:53:41.120545 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:53:41.145006 1437114 cri.go:89] found id: ""
	I1209 05:53:41.145030 1437114 logs.go:282] 0 containers: []
	W1209 05:53:41.145038 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:53:41.145044 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:53:41.145100 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:53:41.168892 1437114 cri.go:89] found id: ""
	I1209 05:53:41.168917 1437114 logs.go:282] 0 containers: []
	W1209 05:53:41.168925 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:53:41.168932 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:53:41.168989 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:53:41.206601 1437114 cri.go:89] found id: ""
	I1209 05:53:41.206630 1437114 logs.go:282] 0 containers: []
	W1209 05:53:41.206641 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:53:41.206653 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:53:41.206721 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:53:41.247172 1437114 cri.go:89] found id: ""
	I1209 05:53:41.247204 1437114 logs.go:282] 0 containers: []
	W1209 05:53:41.247212 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:53:41.247219 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:53:41.247276 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:53:41.271589 1437114 cri.go:89] found id: ""
	I1209 05:53:41.271613 1437114 logs.go:282] 0 containers: []
	W1209 05:53:41.271621 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:53:41.271628 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:53:41.271714 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:53:41.298007 1437114 cri.go:89] found id: ""
	I1209 05:53:41.298032 1437114 logs.go:282] 0 containers: []
	W1209 05:53:41.298041 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:53:41.298047 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:53:41.298105 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:53:41.325987 1437114 cri.go:89] found id: ""
	I1209 05:53:41.326010 1437114 logs.go:282] 0 containers: []
	W1209 05:53:41.326025 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:53:41.326050 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:53:41.326131 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:53:41.351424 1437114 cri.go:89] found id: ""
	I1209 05:53:41.351449 1437114 logs.go:282] 0 containers: []
	W1209 05:53:41.351457 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:53:41.351466 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:53:41.351476 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:53:41.376872 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:53:41.376906 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:53:41.405296 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:53:41.405322 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:53:41.461131 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:53:41.461167 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:53:41.477891 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:53:41.477920 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:53:41.546568 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:53:41.537827    5907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:41.538521    5907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:41.540212    5907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:41.540814    5907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:41.542724    5907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:53:41.537827    5907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:41.538521    5907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:41.540212    5907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:41.540814    5907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:41.542724    5907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
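	The repeated "connection refused" on localhost:8443 above is consistent with the empty crictl listings: no kube-apiserver container exists, so nothing is listening on the apiserver port. A minimal manual check of that diagnosis, assuming shell access to the node and the iproute2 ss tool (the port and kubeconfig path are copied from the log above):

	    # Is anything listening on the apiserver port?
	    sudo ss -ltn 'sport = :8443'

	    # Does kubectl reach the server with the same kubeconfig the log uses?
	    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl \
	        --kubeconfig=/var/lib/minikube/kubeconfig version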
	I1209 05:53:44.046855 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:53:44.058136 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:53:44.058209 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:53:44.086287 1437114 cri.go:89] found id: ""
	I1209 05:53:44.086311 1437114 logs.go:282] 0 containers: []
	W1209 05:53:44.086320 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:53:44.086326 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:53:44.086390 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:53:44.110388 1437114 cri.go:89] found id: ""
	I1209 05:53:44.110411 1437114 logs.go:282] 0 containers: []
	W1209 05:53:44.110419 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:53:44.110425 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:53:44.110481 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:53:44.134842 1437114 cri.go:89] found id: ""
	I1209 05:53:44.134864 1437114 logs.go:282] 0 containers: []
	W1209 05:53:44.134873 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:53:44.134879 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:53:44.134936 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:53:44.161691 1437114 cri.go:89] found id: ""
	I1209 05:53:44.161716 1437114 logs.go:282] 0 containers: []
	W1209 05:53:44.161725 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:53:44.161732 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:53:44.161789 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:53:44.195302 1437114 cri.go:89] found id: ""
	I1209 05:53:44.195326 1437114 logs.go:282] 0 containers: []
	W1209 05:53:44.195335 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:53:44.195341 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:53:44.195408 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:53:44.225882 1437114 cri.go:89] found id: ""
	I1209 05:53:44.225907 1437114 logs.go:282] 0 containers: []
	W1209 05:53:44.225916 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:53:44.225922 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:53:44.225981 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:53:44.253610 1437114 cri.go:89] found id: ""
	I1209 05:53:44.253636 1437114 logs.go:282] 0 containers: []
	W1209 05:53:44.253645 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:53:44.253655 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:53:44.253734 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:53:44.281815 1437114 cri.go:89] found id: ""
	I1209 05:53:44.281840 1437114 logs.go:282] 0 containers: []
	W1209 05:53:44.281848 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:53:44.281857 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:53:44.281868 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:53:44.339663 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:53:44.339702 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:53:44.355859 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:53:44.355938 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:53:44.429444 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:53:44.421835    6007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:44.422435    6007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:44.423949    6007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:44.424455    6007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:44.425745    6007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:53:44.421835    6007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:44.422435    6007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:44.423949    6007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:44.424455    6007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:44.425745    6007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:53:44.429466 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:53:44.429483 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:53:44.455230 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:53:44.455267 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
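	Each gathering pass runs the same fixed set of commands seen above. Spelled out with the flag meanings (per util-linux dmesg: -P no pager, -H human-readable, -L=never no color, --level restricts to warning-and-worse; the backtick fallback keeps the pipeline from breaking when which finds no crictl on PATH):

	    sudo journalctl -u kubelet -n 400        # last 400 kubelet journal entries
	    sudo journalctl -u containerd -n 400     # last 400 containerd journal entries
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a   # CRI first, docker as fallback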
	I1209 05:53:46.982212 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:53:46.993498 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:53:46.993587 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:53:47.023958 1437114 cri.go:89] found id: ""
	I1209 05:53:47.023982 1437114 logs.go:282] 0 containers: []
	W1209 05:53:47.023991 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:53:47.023997 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:53:47.024069 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:53:47.048879 1437114 cri.go:89] found id: ""
	I1209 05:53:47.048901 1437114 logs.go:282] 0 containers: []
	W1209 05:53:47.048910 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:53:47.048916 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:53:47.048983 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:53:47.073853 1437114 cri.go:89] found id: ""
	I1209 05:53:47.073878 1437114 logs.go:282] 0 containers: []
	W1209 05:53:47.073886 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:53:47.073894 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:53:47.073955 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:53:47.096844 1437114 cri.go:89] found id: ""
	I1209 05:53:47.096869 1437114 logs.go:282] 0 containers: []
	W1209 05:53:47.096877 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:53:47.096884 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:53:47.096945 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:53:47.120160 1437114 cri.go:89] found id: ""
	I1209 05:53:47.120185 1437114 logs.go:282] 0 containers: []
	W1209 05:53:47.120194 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:53:47.120200 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:53:47.120261 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:53:47.145073 1437114 cri.go:89] found id: ""
	I1209 05:53:47.145139 1437114 logs.go:282] 0 containers: []
	W1209 05:53:47.145155 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:53:47.145163 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:53:47.145226 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:53:47.168839 1437114 cri.go:89] found id: ""
	I1209 05:53:47.168862 1437114 logs.go:282] 0 containers: []
	W1209 05:53:47.168870 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:53:47.168878 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:53:47.168956 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:53:47.200241 1437114 cri.go:89] found id: ""
	I1209 05:53:47.200264 1437114 logs.go:282] 0 containers: []
	W1209 05:53:47.200272 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:53:47.200282 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:53:47.200311 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:53:47.261748 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:53:47.261783 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:53:47.277688 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:53:47.277718 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:53:47.342796 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:53:47.334710    6118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:47.335374    6118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:47.336895    6118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:47.337477    6118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:47.338953    6118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:53:47.334710    6118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:47.335374    6118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:47.336895    6118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:47.337477    6118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:47.338953    6118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:53:47.342859 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:53:47.342886 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:53:47.367837 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:53:47.367872 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
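	The per-component listings all follow one pattern: crictl filters all containers, running or exited (-a), by name and prints only their IDs (--quiet). An empty result is exactly what produces the paired found id: "" / 0 containers lines. One probe reproduced by hand, assuming crictl is configured against the same containerd runtime as in the log:

	    # List IDs of all kube-apiserver containers, running or exited.
	    sudo crictl ps -a --quiet --name=kube-apiserver
	    # Empty output means the runtime has never created the container,
	    # which matches every probe in this section.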
	I1209 05:53:49.896241 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:53:49.908838 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:53:49.908918 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:53:49.942190 1437114 cri.go:89] found id: ""
	I1209 05:53:49.942212 1437114 logs.go:282] 0 containers: []
	W1209 05:53:49.942221 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:53:49.942226 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:53:49.942387 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:53:49.977371 1437114 cri.go:89] found id: ""
	I1209 05:53:49.977393 1437114 logs.go:282] 0 containers: []
	W1209 05:53:49.977401 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:53:49.977408 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:53:49.977468 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:53:50.002223 1437114 cri.go:89] found id: ""
	I1209 05:53:50.002247 1437114 logs.go:282] 0 containers: []
	W1209 05:53:50.002255 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:53:50.002262 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:53:50.002326 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:53:50.032431 1437114 cri.go:89] found id: ""
	I1209 05:53:50.032458 1437114 logs.go:282] 0 containers: []
	W1209 05:53:50.032467 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:53:50.032474 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:53:50.032535 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:53:50.062289 1437114 cri.go:89] found id: ""
	I1209 05:53:50.062314 1437114 logs.go:282] 0 containers: []
	W1209 05:53:50.062323 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:53:50.062329 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:53:50.062418 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:53:50.088271 1437114 cri.go:89] found id: ""
	I1209 05:53:50.088298 1437114 logs.go:282] 0 containers: []
	W1209 05:53:50.088307 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:53:50.088313 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:53:50.088382 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:53:50.114549 1437114 cri.go:89] found id: ""
	I1209 05:53:50.114629 1437114 logs.go:282] 0 containers: []
	W1209 05:53:50.115120 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:53:50.115137 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:53:50.115209 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:53:50.141196 1437114 cri.go:89] found id: ""
	I1209 05:53:50.141276 1437114 logs.go:282] 0 containers: []
	W1209 05:53:50.141298 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:53:50.141318 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:53:50.141353 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:53:50.198211 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:53:50.198284 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:53:50.215943 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:53:50.216047 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:53:50.281793 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:53:50.272885    6231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:50.273579    6231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:50.275295    6231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:50.275902    6231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:50.277606    6231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:53:50.272885    6231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:50.273579    6231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:50.275295    6231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:50.275902    6231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:50.277606    6231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:53:50.281814 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:53:50.281826 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:53:50.308006 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:53:50.308052 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
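	The timestamps show the same probe repeating roughly every three seconds: each pass starts with pgrep -xnf matching the full kube-apiserver command line (-f full command line, -x exact match, -n newest process), and falls through to log gathering only when no process is found. A minimal shell sketch of an equivalent wait loop (an illustration only, not minikube's Go implementation; the 3 s interval and 240 s deadline are assumptions):

	    # Poll for a running apiserver, giving up after the deadline.
	    deadline=$((SECONDS + 240))
	    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	        [ "$SECONDS" -ge "$deadline" ] && { echo "apiserver never came up" >&2; exit 1; }
	        sleep 3
	    done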
	I1209 05:53:52.837556 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:53:52.848136 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:53:52.848208 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:53:52.872274 1437114 cri.go:89] found id: ""
	I1209 05:53:52.872302 1437114 logs.go:282] 0 containers: []
	W1209 05:53:52.872310 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:53:52.872317 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:53:52.872375 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:53:52.899101 1437114 cri.go:89] found id: ""
	I1209 05:53:52.899125 1437114 logs.go:282] 0 containers: []
	W1209 05:53:52.899134 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:53:52.899140 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:53:52.899199 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:53:52.926800 1437114 cri.go:89] found id: ""
	I1209 05:53:52.926825 1437114 logs.go:282] 0 containers: []
	W1209 05:53:52.926834 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:53:52.926840 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:53:52.926900 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:53:52.962012 1437114 cri.go:89] found id: ""
	I1209 05:53:52.962037 1437114 logs.go:282] 0 containers: []
	W1209 05:53:52.962055 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:53:52.962063 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:53:52.962140 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:53:52.996310 1437114 cri.go:89] found id: ""
	I1209 05:53:52.996336 1437114 logs.go:282] 0 containers: []
	W1209 05:53:52.996345 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:53:52.996351 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:53:52.996410 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:53:53.031535 1437114 cri.go:89] found id: ""
	I1209 05:53:53.031563 1437114 logs.go:282] 0 containers: []
	W1209 05:53:53.031572 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:53:53.031578 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:53:53.031637 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:53:53.059974 1437114 cri.go:89] found id: ""
	I1209 05:53:53.060004 1437114 logs.go:282] 0 containers: []
	W1209 05:53:53.060030 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:53:53.060038 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:53:53.060096 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:53:53.085290 1437114 cri.go:89] found id: ""
	I1209 05:53:53.085356 1437114 logs.go:282] 0 containers: []
	W1209 05:53:53.085386 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:53:53.085403 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:53:53.085415 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:53:53.142442 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:53:53.142477 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:53:53.159141 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:53:53.159169 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:53:53.237761 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:53:53.229474    6343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:53.230237    6343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:53.231874    6343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:53.232214    6343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:53.233652    6343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:53:53.229474    6343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:53.230237    6343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:53.231874    6343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:53.232214    6343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:53.233652    6343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:53:53.237779 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:53:53.237791 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:53:53.265602 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:53:53.265679 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:53:55.800068 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:53:55.810556 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:53:55.810627 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:53:55.836257 1437114 cri.go:89] found id: ""
	I1209 05:53:55.836280 1437114 logs.go:282] 0 containers: []
	W1209 05:53:55.836289 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:53:55.836295 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:53:55.836352 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:53:55.861759 1437114 cri.go:89] found id: ""
	I1209 05:53:55.861783 1437114 logs.go:282] 0 containers: []
	W1209 05:53:55.861792 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:53:55.861798 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:53:55.861865 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:53:55.886950 1437114 cri.go:89] found id: ""
	I1209 05:53:55.886982 1437114 logs.go:282] 0 containers: []
	W1209 05:53:55.886991 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:53:55.886997 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:53:55.887072 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:53:55.912055 1437114 cri.go:89] found id: ""
	I1209 05:53:55.912081 1437114 logs.go:282] 0 containers: []
	W1209 05:53:55.912089 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:53:55.912096 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:53:55.912162 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:53:55.949365 1437114 cri.go:89] found id: ""
	I1209 05:53:55.949431 1437114 logs.go:282] 0 containers: []
	W1209 05:53:55.949455 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:53:55.949471 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:53:55.949545 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:53:55.977916 1437114 cri.go:89] found id: ""
	I1209 05:53:55.977938 1437114 logs.go:282] 0 containers: []
	W1209 05:53:55.977946 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:53:55.977953 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:53:55.978040 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:53:56.013033 1437114 cri.go:89] found id: ""
	I1209 05:53:56.013070 1437114 logs.go:282] 0 containers: []
	W1209 05:53:56.013079 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:53:56.013086 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:53:56.013177 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:53:56.039563 1437114 cri.go:89] found id: ""
	I1209 05:53:56.039610 1437114 logs.go:282] 0 containers: []
	W1209 05:53:56.039620 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:53:56.039629 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:53:56.039641 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:53:56.065976 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:53:56.066014 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:53:56.097703 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:53:56.097732 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:53:56.156555 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:53:56.156594 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:53:56.172549 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:53:56.172576 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:53:56.257220 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:53:56.248866    6469 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:56.249574    6469 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:56.251225    6469 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:56.251719    6469 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:56.253344    6469 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:53:56.248866    6469 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:56.249574    6469 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:56.251225    6469 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:56.251719    6469 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:56.253344    6469 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
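	The "describe nodes" step uses the kubectl binary minikube staged under /var/lib/minikube/binaries/v1.35.0-beta.0/ together with the in-VM kubeconfig, so it is independent of any host kubectl. Here it exits 1 because every attempt dies during API discovery (the memcache.go "couldn't get current server API group list" errors) before any node can be described; the same invocation succeeds once the apiserver is reachable:

	    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
	        --kubeconfig=/var/lib/minikube/kubeconfig
	    echo $?   # 1 while localhost:8443 refuses connections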
	I1209 05:53:58.758071 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:53:58.768718 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:53:58.768796 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:53:58.793984 1437114 cri.go:89] found id: ""
	I1209 05:53:58.794007 1437114 logs.go:282] 0 containers: []
	W1209 05:53:58.794015 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:53:58.794021 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:53:58.794078 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:53:58.818550 1437114 cri.go:89] found id: ""
	I1209 05:53:58.818574 1437114 logs.go:282] 0 containers: []
	W1209 05:53:58.818582 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:53:58.818589 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:53:58.818648 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:53:58.843617 1437114 cri.go:89] found id: ""
	I1209 05:53:58.843696 1437114 logs.go:282] 0 containers: []
	W1209 05:53:58.843719 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:53:58.843738 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:53:58.843809 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:53:58.868732 1437114 cri.go:89] found id: ""
	I1209 05:53:58.868754 1437114 logs.go:282] 0 containers: []
	W1209 05:53:58.868763 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:53:58.868769 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:53:58.868823 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:53:58.892930 1437114 cri.go:89] found id: ""
	I1209 05:53:58.892953 1437114 logs.go:282] 0 containers: []
	W1209 05:53:58.892961 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:53:58.892968 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:53:58.893027 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:53:58.917833 1437114 cri.go:89] found id: ""
	I1209 05:53:58.917857 1437114 logs.go:282] 0 containers: []
	W1209 05:53:58.917865 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:53:58.917872 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:53:58.917933 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:53:58.965955 1437114 cri.go:89] found id: ""
	I1209 05:53:58.965982 1437114 logs.go:282] 0 containers: []
	W1209 05:53:58.965990 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:53:58.965996 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:53:58.966054 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:53:58.999708 1437114 cri.go:89] found id: ""
	I1209 05:53:58.999736 1437114 logs.go:282] 0 containers: []
	W1209 05:53:58.999744 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:53:58.999754 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:53:58.999764 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:53:59.065757 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:53:59.057189    6561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:59.058037    6561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:59.059660    6561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:59.060056    6561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:59.061679    6561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:53:59.057189    6561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:59.058037    6561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:59.059660    6561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:59.060056    6561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:53:59.061679    6561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:53:59.065776 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:53:59.065788 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:53:59.090908 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:53:59.090944 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:53:59.118148 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:53:59.118180 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:53:59.175439 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:53:59.175476 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
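	Every cycle walks the same eight component names in order. A compact way to run the whole checklist by hand (names copied from the probes above; prints a container count per component, 0 throughout this section):

	    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	             kube-controller-manager kindnet kubernetes-dashboard; do
	        printf '%-24s %s\n' "$c" "$(sudo crictl ps -a --quiet --name="$c" | wc -l)"
	    done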
	I1209 05:54:01.697656 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:54:01.712348 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:54:01.712424 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:54:01.743582 1437114 cri.go:89] found id: ""
	I1209 05:54:01.743609 1437114 logs.go:282] 0 containers: []
	W1209 05:54:01.743618 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:54:01.743625 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:54:01.743688 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:54:01.769801 1437114 cri.go:89] found id: ""
	I1209 05:54:01.769825 1437114 logs.go:282] 0 containers: []
	W1209 05:54:01.769834 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:54:01.769840 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:54:01.769896 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:54:01.798274 1437114 cri.go:89] found id: ""
	I1209 05:54:01.798299 1437114 logs.go:282] 0 containers: []
	W1209 05:54:01.798308 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:54:01.798314 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:54:01.798375 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:54:01.827182 1437114 cri.go:89] found id: ""
	I1209 05:54:01.827207 1437114 logs.go:282] 0 containers: []
	W1209 05:54:01.827215 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:54:01.827222 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:54:01.827284 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:54:01.856540 1437114 cri.go:89] found id: ""
	I1209 05:54:01.856564 1437114 logs.go:282] 0 containers: []
	W1209 05:54:01.856573 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:54:01.856579 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:54:01.856659 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:54:01.885694 1437114 cri.go:89] found id: ""
	I1209 05:54:01.885719 1437114 logs.go:282] 0 containers: []
	W1209 05:54:01.885728 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:54:01.885734 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:54:01.885808 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:54:01.915290 1437114 cri.go:89] found id: ""
	I1209 05:54:01.915318 1437114 logs.go:282] 0 containers: []
	W1209 05:54:01.915327 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:54:01.915333 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:54:01.915392 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:54:01.950840 1437114 cri.go:89] found id: ""
	I1209 05:54:01.950869 1437114 logs.go:282] 0 containers: []
	W1209 05:54:01.950878 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:54:01.950888 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:54:01.950899 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:54:02.014414 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:54:02.014453 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:54:02.032051 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:54:02.032135 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:54:02.095629 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:54:02.087393    6683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:02.088084    6683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:02.089580    6683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:02.090087    6683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:02.091647    6683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:54:02.087393    6683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:02.088084    6683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:02.089580    6683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:02.090087    6683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:02.091647    6683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:54:02.095650 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:54:02.095663 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:54:02.122511 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:54:02.122550 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:54:04.650297 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:54:04.660872 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:54:04.660943 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:54:04.687789 1437114 cri.go:89] found id: ""
	I1209 05:54:04.687819 1437114 logs.go:282] 0 containers: []
	W1209 05:54:04.687827 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:54:04.687833 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:54:04.687902 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:54:04.711324 1437114 cri.go:89] found id: ""
	I1209 05:54:04.711349 1437114 logs.go:282] 0 containers: []
	W1209 05:54:04.711357 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:54:04.711364 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:54:04.711423 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:54:04.737863 1437114 cri.go:89] found id: ""
	I1209 05:54:04.737888 1437114 logs.go:282] 0 containers: []
	W1209 05:54:04.737896 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:54:04.737902 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:54:04.737978 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:54:04.762117 1437114 cri.go:89] found id: ""
	I1209 05:54:04.762143 1437114 logs.go:282] 0 containers: []
	W1209 05:54:04.762153 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:54:04.762160 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:54:04.762242 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:54:04.786158 1437114 cri.go:89] found id: ""
	I1209 05:54:04.786181 1437114 logs.go:282] 0 containers: []
	W1209 05:54:04.786189 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:54:04.786195 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:54:04.786252 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:54:04.810657 1437114 cri.go:89] found id: ""
	I1209 05:54:04.810727 1437114 logs.go:282] 0 containers: []
	W1209 05:54:04.810758 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:54:04.810777 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:54:04.810865 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:54:04.835039 1437114 cri.go:89] found id: ""
	I1209 05:54:04.835061 1437114 logs.go:282] 0 containers: []
	W1209 05:54:04.835069 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:54:04.835075 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:54:04.835132 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:54:04.863664 1437114 cri.go:89] found id: ""
	I1209 05:54:04.863691 1437114 logs.go:282] 0 containers: []
	W1209 05:54:04.863704 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
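Each scan in the loop above is the same crictl invocation with a different --name filter: -a includes exited containers, --quiet prints container IDs only, and --name filters by container name pattern, so an empty result is exactly the found id: "" / 0 containers pair the log records. For example:

    # IDs of all containers (running or exited) whose name matches etcd;
    # no output means the component was never created.
    sudo crictl ps -a --quiet --name=etcd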
	I1209 05:54:04.863713 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:54:04.863724 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:54:04.889846 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:54:04.889882 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:54:04.919060 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:54:04.919086 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
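The kubelet and containerd collectors are plain journal tails: -u selects the systemd unit and -n 400 caps the output at the last 400 entries, e.g.:

    # Last 400 journal entries for the kubelet unit.
    sudo journalctl -u kubelet -n 400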
	I1209 05:54:04.995975 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:54:04.996070 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
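The dmesg collector restricts kernel messages to warning severity and worse via --level; -H asks for human-readable timestamps, which would normally start a pager, so -P suppresses the pager and -L=never strips color for log capture, with tail bounding the size:

    # Kernel messages at warning severity or above, uncolored, unpaged, last 400 lines.
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400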
	I1209 05:54:05.020220 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:54:05.020254 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:54:05.088696 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:54:05.080290    6811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:05.080797    6811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:05.082535    6811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:05.082897    6811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:05.084443    6811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:54:05.080290    6811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:05.080797    6811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:05.082535    6811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:05.082897    6811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:05.084443    6811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
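The describe-nodes failure is the direct consequence of the empty scans above: the node-local kubeconfig points kubectl at https://localhost:8443, and with no kube-apiserver container ever created, nothing listens on that port, so every API discovery request is refused. A spot-check one could run from inside the node (the minikube ssh entry point is an assumption; ss is part of iproute2):

    # Is anything bound to the apiserver port? Expect the fallback message here.
    sudo ss -tlnp | grep 8443 || echo "nothing listening on 8443"
    # The same call the log runner makes:
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig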
	I1209 05:54:07.590606 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:54:07.601036 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:54:07.601107 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:54:07.626527 1437114 cri.go:89] found id: ""
	I1209 05:54:07.626550 1437114 logs.go:282] 0 containers: []
	W1209 05:54:07.626559 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:54:07.626566 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:54:07.626624 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:54:07.656166 1437114 cri.go:89] found id: ""
	I1209 05:54:07.656193 1437114 logs.go:282] 0 containers: []
	W1209 05:54:07.656201 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:54:07.656207 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:54:07.656272 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:54:07.682014 1437114 cri.go:89] found id: ""
	I1209 05:54:07.682038 1437114 logs.go:282] 0 containers: []
	W1209 05:54:07.682046 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:54:07.682052 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:54:07.682116 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:54:07.707210 1437114 cri.go:89] found id: ""
	I1209 05:54:07.707234 1437114 logs.go:282] 0 containers: []
	W1209 05:54:07.707242 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:54:07.707248 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:54:07.707332 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:54:07.731843 1437114 cri.go:89] found id: ""
	I1209 05:54:07.731868 1437114 logs.go:282] 0 containers: []
	W1209 05:54:07.731877 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:54:07.731892 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:54:07.731958 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:54:07.760321 1437114 cri.go:89] found id: ""
	I1209 05:54:07.760346 1437114 logs.go:282] 0 containers: []
	W1209 05:54:07.760354 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:54:07.760363 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:54:07.760424 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:54:07.786309 1437114 cri.go:89] found id: ""
	I1209 05:54:07.786330 1437114 logs.go:282] 0 containers: []
	W1209 05:54:07.786338 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:54:07.786350 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:54:07.786406 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:54:07.809182 1437114 cri.go:89] found id: ""
	I1209 05:54:07.809216 1437114 logs.go:282] 0 containers: []
	W1209 05:54:07.809225 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:54:07.809233 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:54:07.809244 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:54:07.839994 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:54:07.840050 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:54:07.898120 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:54:07.898152 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:54:07.914130 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:54:07.914234 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:54:08.009314 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:54:07.997479    6916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:07.998081    6916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:07.999634    6916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:08.000228    6916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:08.002087    6916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:54:07.997479    6916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:07.998081    6916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:07.999634    6916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:08.000228    6916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:08.002087    6916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
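The timestamps show this scan-and-gather cycle repeating every 2.5-3 seconds with identical results: a poll loop waiting for an apiserver that never converges. A minimal illustrative sketch of such a loop (the interval and deadline are invented for illustration, not minikube's actual values):

    # Poll until kube-apiserver appears or a deadline passes.
    deadline=$((SECONDS + 300))
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      if [ "$SECONDS" -ge "$deadline" ]; then
        echo "timed out waiting for kube-apiserver" >&2
        break
      fi
      sleep 2.5
    done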
	I1209 05:54:08.009391 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:54:08.009413 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:54:10.536185 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:54:10.547685 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:54:10.547757 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:54:10.571843 1437114 cri.go:89] found id: ""
	I1209 05:54:10.571865 1437114 logs.go:282] 0 containers: []
	W1209 05:54:10.571873 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:54:10.571879 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:54:10.571935 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:54:10.598065 1437114 cri.go:89] found id: ""
	I1209 05:54:10.598092 1437114 logs.go:282] 0 containers: []
	W1209 05:54:10.598101 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:54:10.598107 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:54:10.598165 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:54:10.623072 1437114 cri.go:89] found id: ""
	I1209 05:54:10.623098 1437114 logs.go:282] 0 containers: []
	W1209 05:54:10.623107 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:54:10.623113 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:54:10.623200 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:54:10.649781 1437114 cri.go:89] found id: ""
	I1209 05:54:10.649806 1437114 logs.go:282] 0 containers: []
	W1209 05:54:10.649823 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:54:10.649830 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:54:10.649886 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:54:10.677496 1437114 cri.go:89] found id: ""
	I1209 05:54:10.677529 1437114 logs.go:282] 0 containers: []
	W1209 05:54:10.677538 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:54:10.677544 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:54:10.677603 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:54:10.705951 1437114 cri.go:89] found id: ""
	I1209 05:54:10.705982 1437114 logs.go:282] 0 containers: []
	W1209 05:54:10.705991 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:54:10.705997 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:54:10.706062 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:54:10.730882 1437114 cri.go:89] found id: ""
	I1209 05:54:10.730957 1437114 logs.go:282] 0 containers: []
	W1209 05:54:10.730980 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:54:10.730998 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:54:10.731088 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:54:10.757722 1437114 cri.go:89] found id: ""
	I1209 05:54:10.757753 1437114 logs.go:282] 0 containers: []
	W1209 05:54:10.757761 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:54:10.757771 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:54:10.757784 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:54:10.817777 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:54:10.817812 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:54:10.834055 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:54:10.834083 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:54:10.898677 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:54:10.890728    7018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:10.891591    7018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:10.893093    7018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:10.893520    7018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:10.894977    7018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:54:10.890728    7018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:10.891591    7018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:10.893093    7018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:10.893520    7018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:10.894977    7018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:54:10.898700 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:54:10.898713 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:54:10.923656 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:54:10.923690 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:54:13.467228 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:54:13.477812 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:54:13.477886 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:54:13.503323 1437114 cri.go:89] found id: ""
	I1209 05:54:13.503351 1437114 logs.go:282] 0 containers: []
	W1209 05:54:13.503360 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:54:13.503367 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:54:13.503441 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:54:13.538282 1437114 cri.go:89] found id: ""
	I1209 05:54:13.538310 1437114 logs.go:282] 0 containers: []
	W1209 05:54:13.538318 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:54:13.538324 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:54:13.538382 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:54:13.565556 1437114 cri.go:89] found id: ""
	I1209 05:54:13.565584 1437114 logs.go:282] 0 containers: []
	W1209 05:54:13.565594 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:54:13.565600 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:54:13.565659 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:54:13.594477 1437114 cri.go:89] found id: ""
	I1209 05:54:13.594499 1437114 logs.go:282] 0 containers: []
	W1209 05:54:13.594508 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:54:13.594514 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:54:13.594575 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:54:13.618630 1437114 cri.go:89] found id: ""
	I1209 05:54:13.618651 1437114 logs.go:282] 0 containers: []
	W1209 05:54:13.618658 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:54:13.618664 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:54:13.618720 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:54:13.643760 1437114 cri.go:89] found id: ""
	I1209 05:54:13.643786 1437114 logs.go:282] 0 containers: []
	W1209 05:54:13.643795 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:54:13.643801 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:54:13.643858 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:54:13.669716 1437114 cri.go:89] found id: ""
	I1209 05:54:13.669741 1437114 logs.go:282] 0 containers: []
	W1209 05:54:13.669749 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:54:13.669756 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:54:13.669848 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:54:13.693820 1437114 cri.go:89] found id: ""
	I1209 05:54:13.693847 1437114 logs.go:282] 0 containers: []
	W1209 05:54:13.693855 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:54:13.693864 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:54:13.693875 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:54:13.750893 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:54:13.750940 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:54:13.767174 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:54:13.767247 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:54:13.834450 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:54:13.823547    7135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:13.824086    7135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:13.828520    7135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:13.828897    7135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:13.830390    7135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:54:13.823547    7135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:13.824086    7135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:13.828520    7135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:13.828897    7135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:13.830390    7135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:54:13.834476 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:54:13.834491 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:54:13.860109 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:54:13.860148 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:54:16.386616 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:54:16.396767 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:54:16.396835 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:54:16.421557 1437114 cri.go:89] found id: ""
	I1209 05:54:16.421580 1437114 logs.go:282] 0 containers: []
	W1209 05:54:16.421589 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:54:16.421595 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:54:16.421655 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:54:16.462411 1437114 cri.go:89] found id: ""
	I1209 05:54:16.462432 1437114 logs.go:282] 0 containers: []
	W1209 05:54:16.462441 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:54:16.462447 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:54:16.462505 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:54:16.493789 1437114 cri.go:89] found id: ""
	I1209 05:54:16.493811 1437114 logs.go:282] 0 containers: []
	W1209 05:54:16.493819 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:54:16.493825 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:54:16.493887 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:54:16.523482 1437114 cri.go:89] found id: ""
	I1209 05:54:16.523504 1437114 logs.go:282] 0 containers: []
	W1209 05:54:16.523513 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:54:16.523519 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:54:16.523578 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:54:16.548318 1437114 cri.go:89] found id: ""
	I1209 05:54:16.548354 1437114 logs.go:282] 0 containers: []
	W1209 05:54:16.548363 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:54:16.548386 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:54:16.548471 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:54:16.573131 1437114 cri.go:89] found id: ""
	I1209 05:54:16.573158 1437114 logs.go:282] 0 containers: []
	W1209 05:54:16.573167 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:54:16.573173 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:54:16.573233 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:54:16.596652 1437114 cri.go:89] found id: ""
	I1209 05:54:16.596680 1437114 logs.go:282] 0 containers: []
	W1209 05:54:16.596689 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:54:16.596695 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:54:16.596754 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:54:16.622109 1437114 cri.go:89] found id: ""
	I1209 05:54:16.622131 1437114 logs.go:282] 0 containers: []
	W1209 05:54:16.622139 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:54:16.622148 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:54:16.622160 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:54:16.637977 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:54:16.638014 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:54:16.701887 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:54:16.693598    7245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:16.694125    7245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:16.695778    7245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:16.696319    7245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:16.697759    7245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:54:16.693598    7245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:16.694125    7245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:16.695778    7245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:16.696319    7245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:16.697759    7245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:54:16.701914 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:54:16.701927 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:54:16.728328 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:54:16.728362 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:54:16.756551 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:54:16.756581 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:54:19.313862 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:54:19.323798 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:54:19.323881 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:54:19.348899 1437114 cri.go:89] found id: ""
	I1209 05:54:19.348924 1437114 logs.go:282] 0 containers: []
	W1209 05:54:19.348932 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:54:19.348939 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:54:19.348996 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:54:19.373133 1437114 cri.go:89] found id: ""
	I1209 05:54:19.373156 1437114 logs.go:282] 0 containers: []
	W1209 05:54:19.373164 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:54:19.373170 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:54:19.373226 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:54:19.397615 1437114 cri.go:89] found id: ""
	I1209 05:54:19.397642 1437114 logs.go:282] 0 containers: []
	W1209 05:54:19.397651 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:54:19.397657 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:54:19.397716 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:54:19.426484 1437114 cri.go:89] found id: ""
	I1209 05:54:19.426505 1437114 logs.go:282] 0 containers: []
	W1209 05:54:19.426513 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:54:19.426519 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:54:19.426575 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:54:19.454826 1437114 cri.go:89] found id: ""
	I1209 05:54:19.454852 1437114 logs.go:282] 0 containers: []
	W1209 05:54:19.454868 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:54:19.454874 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:54:19.454941 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:54:19.483800 1437114 cri.go:89] found id: ""
	I1209 05:54:19.483821 1437114 logs.go:282] 0 containers: []
	W1209 05:54:19.483829 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:54:19.483835 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:54:19.483890 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:54:19.510301 1437114 cri.go:89] found id: ""
	I1209 05:54:19.510322 1437114 logs.go:282] 0 containers: []
	W1209 05:54:19.510330 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:54:19.510336 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:54:19.510392 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:54:19.533740 1437114 cri.go:89] found id: ""
	I1209 05:54:19.533766 1437114 logs.go:282] 0 containers: []
	W1209 05:54:19.533775 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:54:19.533785 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:54:19.533797 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:54:19.590533 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:54:19.590609 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:54:19.607749 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:54:19.607831 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:54:19.670098 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:54:19.662273    7359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:19.663063    7359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:19.664591    7359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:19.664886    7359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:19.666309    7359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:54:19.662273    7359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:19.663063    7359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:19.664591    7359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:19.664886    7359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:19.666309    7359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:54:19.670121 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:54:19.670135 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:54:19.696365 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:54:19.696401 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:54:22.225234 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:54:22.235522 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:54:22.235590 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:54:22.260044 1437114 cri.go:89] found id: ""
	I1209 05:54:22.260067 1437114 logs.go:282] 0 containers: []
	W1209 05:54:22.260076 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:54:22.260082 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:54:22.260141 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:54:22.283666 1437114 cri.go:89] found id: ""
	I1209 05:54:22.283694 1437114 logs.go:282] 0 containers: []
	W1209 05:54:22.283702 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:54:22.283708 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:54:22.283764 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:54:22.307779 1437114 cri.go:89] found id: ""
	I1209 05:54:22.307812 1437114 logs.go:282] 0 containers: []
	W1209 05:54:22.307821 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:54:22.307827 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:54:22.307884 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:54:22.333595 1437114 cri.go:89] found id: ""
	I1209 05:54:22.333621 1437114 logs.go:282] 0 containers: []
	W1209 05:54:22.333629 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:54:22.333635 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:54:22.333692 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:54:22.357452 1437114 cri.go:89] found id: ""
	I1209 05:54:22.357476 1437114 logs.go:282] 0 containers: []
	W1209 05:54:22.357484 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:54:22.357490 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:54:22.357551 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:54:22.382107 1437114 cri.go:89] found id: ""
	I1209 05:54:22.382170 1437114 logs.go:282] 0 containers: []
	W1209 05:54:22.382184 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:54:22.382192 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:54:22.382251 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:54:22.406738 1437114 cri.go:89] found id: ""
	I1209 05:54:22.406770 1437114 logs.go:282] 0 containers: []
	W1209 05:54:22.406780 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:54:22.406787 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:54:22.406858 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:54:22.432967 1437114 cri.go:89] found id: ""
	I1209 05:54:22.433002 1437114 logs.go:282] 0 containers: []
	W1209 05:54:22.433011 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:54:22.433020 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:54:22.433030 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:54:22.496308 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:54:22.496347 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:54:22.513215 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:54:22.513243 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:54:22.576557 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:54:22.568457    7472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:22.569106    7472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:22.570813    7472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:22.571288    7472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:22.572769    7472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:54:22.568457    7472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:22.569106    7472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:22.570813    7472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:22.571288    7472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:22.572769    7472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:54:22.576620 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:54:22.576641 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:54:22.601775 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:54:22.601808 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:54:25.129209 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:54:25.140801 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:54:25.140875 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:54:25.167673 1437114 cri.go:89] found id: ""
	I1209 05:54:25.167699 1437114 logs.go:282] 0 containers: []
	W1209 05:54:25.167708 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:54:25.167714 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:54:25.167774 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:54:25.213289 1437114 cri.go:89] found id: ""
	I1209 05:54:25.213317 1437114 logs.go:282] 0 containers: []
	W1209 05:54:25.213326 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:54:25.213332 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:54:25.213394 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:54:25.251150 1437114 cri.go:89] found id: ""
	I1209 05:54:25.251173 1437114 logs.go:282] 0 containers: []
	W1209 05:54:25.251181 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:54:25.251187 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:54:25.251251 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:54:25.278324 1437114 cri.go:89] found id: ""
	I1209 05:54:25.278347 1437114 logs.go:282] 0 containers: []
	W1209 05:54:25.278355 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:54:25.278361 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:54:25.278426 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:54:25.305947 1437114 cri.go:89] found id: ""
	I1209 05:54:25.305968 1437114 logs.go:282] 0 containers: []
	W1209 05:54:25.305976 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:54:25.305982 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:54:25.306043 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:54:25.330741 1437114 cri.go:89] found id: ""
	I1209 05:54:25.330766 1437114 logs.go:282] 0 containers: []
	W1209 05:54:25.330774 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:54:25.330780 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:54:25.330842 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:54:25.357251 1437114 cri.go:89] found id: ""
	I1209 05:54:25.357289 1437114 logs.go:282] 0 containers: []
	W1209 05:54:25.357297 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:54:25.357303 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:54:25.357361 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:54:25.381550 1437114 cri.go:89] found id: ""
	I1209 05:54:25.381574 1437114 logs.go:282] 0 containers: []
	W1209 05:54:25.381582 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:54:25.381643 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:54:25.381661 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:54:25.407792 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:54:25.407826 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:54:25.444380 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:54:25.444411 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:54:25.508703 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:54:25.508739 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:54:25.525308 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:54:25.525335 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:54:25.590403 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:54:25.582560    7600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:25.583141    7600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:25.584775    7600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:25.585120    7600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:25.586571    7600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:54:25.582560    7600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:25.583141    7600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:25.584775    7600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:25.585120    7600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:25.586571    7600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
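	[editor's note] Every describe-nodes attempt in this window fails the same way: the versioned kubectl (run from /var/lib/minikube/binaries/v1.35.0-beta.0 against the in-VM kubeconfig) gets connection refused dialing [::1]:8443, meaning nothing is listening where the apiserver should be, which matches the empty crictl listings that follow. A minimal probe for that condition, assuming the same localhost:8443 endpoint; probeAPIServer is an illustrative helper:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// probeAPIServer reports whether anything accepts TCP connections on the
	// apiserver port; connection refused here matches the kubectl errors above.
	func probeAPIServer(addr string) error {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err != nil {
			return err // e.g. "connect: connection refused"
		}
		return conn.Close()
	}

	func main() {
		if err := probeAPIServer("localhost:8443"); err != nil {
			fmt.Println("apiserver not reachable:", err)
			return
		}
		fmt.Println("apiserver port is accepting connections")
	}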
	I1209 05:54:28.090673 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:54:28.101806 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:54:28.101927 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:54:28.126175 1437114 cri.go:89] found id: ""
	I1209 05:54:28.126210 1437114 logs.go:282] 0 containers: []
	W1209 05:54:28.126219 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:54:28.126225 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:54:28.126302 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:54:28.154842 1437114 cri.go:89] found id: ""
	I1209 05:54:28.154863 1437114 logs.go:282] 0 containers: []
	W1209 05:54:28.154872 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:54:28.154878 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:54:28.154936 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:54:28.181513 1437114 cri.go:89] found id: ""
	I1209 05:54:28.181536 1437114 logs.go:282] 0 containers: []
	W1209 05:54:28.181543 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:54:28.181550 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:54:28.181606 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:54:28.208958 1437114 cri.go:89] found id: ""
	I1209 05:54:28.208979 1437114 logs.go:282] 0 containers: []
	W1209 05:54:28.208987 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:54:28.208993 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:54:28.209051 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:54:28.236261 1437114 cri.go:89] found id: ""
	I1209 05:54:28.236288 1437114 logs.go:282] 0 containers: []
	W1209 05:54:28.236296 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:54:28.236308 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:54:28.236365 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:54:28.264550 1437114 cri.go:89] found id: ""
	I1209 05:54:28.264573 1437114 logs.go:282] 0 containers: []
	W1209 05:54:28.264582 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:54:28.264588 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:54:28.264645 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:54:28.288754 1437114 cri.go:89] found id: ""
	I1209 05:54:28.288779 1437114 logs.go:282] 0 containers: []
	W1209 05:54:28.288787 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:54:28.288805 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:54:28.288865 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:54:28.311894 1437114 cri.go:89] found id: ""
	I1209 05:54:28.311922 1437114 logs.go:282] 0 containers: []
	W1209 05:54:28.311931 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
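	[editor's note] The sweep above asks crictl for each expected control-plane and addon container by name (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, kubernetes-dashboard) and records an empty ID list for every one, so log gathering falls back to host-level sources. A compact sketch of that sweep, assuming crictl is on PATH and sudo is non-interactive; listContainersByName is an illustrative name:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listContainersByName returns the container IDs crictl reports for a name
	// filter; an empty slice corresponds to the `found id: ""` lines above.
	func listContainersByName(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
		}
		for _, c := range components {
			ids, err := listContainersByName(c)
			if err != nil {
				fmt.Printf("%s: crictl failed: %v\n", c, err)
				continue
			}
			fmt.Printf("%s: %d container(s)\n", c, len(ids))
		}
	}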
	I1209 05:54:28.311941 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:54:28.311952 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:54:28.368882 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:54:28.368916 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:54:28.385073 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:54:28.385102 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:54:28.453852 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:54:28.445585    7699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:28.446317    7699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:28.447999    7699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:28.448560    7699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:28.449990    7699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:54:28.445585    7699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:28.446317    7699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:28.447999    7699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:28.448560    7699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:28.449990    7699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:54:28.453912 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:54:28.453948 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:54:28.481464 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:54:28.481542 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
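	[editor's note] Between diagnostic sweeps the runner re-checks, at roughly 2.5-second intervals in this log, whether an apiserver process has appeared (sudo pgrep -xnf kube-apiserver.*minikube.*, where -x matches the full command line exactly, -n picks the newest match, -f matches against the full command line), and only repeats the crictl/log pass when it has not. A sketch of that wait loop under the same assumptions; the interval and deadline values are illustrative, not minikube's:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// apiServerRunning mirrors the pgrep check above: exit status 0 means a
	// matching kube-apiserver process exists.
	func apiServerRunning() bool {
		return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
	}

	func main() {
		// Illustrative interval/deadline; the log shows ~2.5s between attempts.
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			if apiServerRunning() {
				fmt.Println("kube-apiserver process found")
				return
			}
			time.Sleep(2500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for kube-apiserver")
	}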
	I1209 05:54:31.017971 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:54:31.028776 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:54:31.028848 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:54:31.059955 1437114 cri.go:89] found id: ""
	I1209 05:54:31.059979 1437114 logs.go:282] 0 containers: []
	W1209 05:54:31.059988 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:54:31.059995 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:54:31.060087 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:54:31.085360 1437114 cri.go:89] found id: ""
	I1209 05:54:31.085389 1437114 logs.go:282] 0 containers: []
	W1209 05:54:31.085398 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:54:31.085404 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:54:31.085466 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:54:31.112050 1437114 cri.go:89] found id: ""
	I1209 05:54:31.112083 1437114 logs.go:282] 0 containers: []
	W1209 05:54:31.112092 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:54:31.112100 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:54:31.112170 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:54:31.139102 1437114 cri.go:89] found id: ""
	I1209 05:54:31.139138 1437114 logs.go:282] 0 containers: []
	W1209 05:54:31.139147 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:54:31.139153 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:54:31.139223 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:54:31.166677 1437114 cri.go:89] found id: ""
	I1209 05:54:31.166710 1437114 logs.go:282] 0 containers: []
	W1209 05:54:31.166720 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:54:31.166727 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:54:31.166818 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:54:31.204582 1437114 cri.go:89] found id: ""
	I1209 05:54:31.204610 1437114 logs.go:282] 0 containers: []
	W1209 05:54:31.204619 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:54:31.204626 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:54:31.204693 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:54:31.242874 1437114 cri.go:89] found id: ""
	I1209 05:54:31.242900 1437114 logs.go:282] 0 containers: []
	W1209 05:54:31.242909 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:54:31.242916 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:54:31.242991 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:54:31.268196 1437114 cri.go:89] found id: ""
	I1209 05:54:31.268225 1437114 logs.go:282] 0 containers: []
	W1209 05:54:31.268234 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:54:31.268243 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:54:31.268254 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:54:31.293521 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:54:31.293559 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:54:31.321144 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:54:31.321175 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:54:31.378617 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:54:31.378656 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:54:31.394506 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:54:31.394533 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:54:31.467240 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:54:31.458393    7825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:31.459167    7825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:31.460831    7825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:31.461408    7825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:31.463045    7825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:54:31.458393    7825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:31.459167    7825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:31.460831    7825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:31.461408    7825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:31.463045    7825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:54:33.967506 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:54:33.977826 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:54:33.977902 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:54:34.002325 1437114 cri.go:89] found id: ""
	I1209 05:54:34.002351 1437114 logs.go:282] 0 containers: []
	W1209 05:54:34.002360 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:54:34.002367 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:54:34.002443 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:54:34.029888 1437114 cri.go:89] found id: ""
	I1209 05:54:34.029919 1437114 logs.go:282] 0 containers: []
	W1209 05:54:34.029928 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:54:34.029935 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:54:34.029996 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:54:34.058673 1437114 cri.go:89] found id: ""
	I1209 05:54:34.058698 1437114 logs.go:282] 0 containers: []
	W1209 05:54:34.058706 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:54:34.058712 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:54:34.058783 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:54:34.083346 1437114 cri.go:89] found id: ""
	I1209 05:54:34.083370 1437114 logs.go:282] 0 containers: []
	W1209 05:54:34.083379 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:54:34.083385 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:54:34.083453 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:54:34.108098 1437114 cri.go:89] found id: ""
	I1209 05:54:34.108126 1437114 logs.go:282] 0 containers: []
	W1209 05:54:34.108135 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:54:34.108141 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:54:34.108227 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:54:34.133779 1437114 cri.go:89] found id: ""
	I1209 05:54:34.133803 1437114 logs.go:282] 0 containers: []
	W1209 05:54:34.133812 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:54:34.133819 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:54:34.133877 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:54:34.161528 1437114 cri.go:89] found id: ""
	I1209 05:54:34.161607 1437114 logs.go:282] 0 containers: []
	W1209 05:54:34.161639 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:54:34.161662 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:54:34.161779 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:54:34.191325 1437114 cri.go:89] found id: ""
	I1209 05:54:34.191400 1437114 logs.go:282] 0 containers: []
	W1209 05:54:34.191423 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:54:34.191443 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:54:34.191493 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:54:34.258939 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:54:34.258977 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:54:34.275607 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:54:34.275640 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:54:34.346638 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:54:34.338621    7924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:34.339268    7924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:34.340363    7924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:34.340982    7924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:34.342615    7924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:54:34.338621    7924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:34.339268    7924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:34.340363    7924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:34.340982    7924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:34.342615    7924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:54:34.346709 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:54:34.346754 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:54:34.373053 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:54:34.373092 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:54:36.904183 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:54:36.914625 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:54:36.914703 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:54:36.939165 1437114 cri.go:89] found id: ""
	I1209 05:54:36.939204 1437114 logs.go:282] 0 containers: []
	W1209 05:54:36.939213 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:54:36.939220 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:54:36.939280 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:54:36.968277 1437114 cri.go:89] found id: ""
	I1209 05:54:36.968303 1437114 logs.go:282] 0 containers: []
	W1209 05:54:36.968312 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:54:36.968319 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:54:36.968379 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:54:36.993837 1437114 cri.go:89] found id: ""
	I1209 05:54:36.993866 1437114 logs.go:282] 0 containers: []
	W1209 05:54:36.993875 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:54:36.993882 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:54:36.993939 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:54:37.029321 1437114 cri.go:89] found id: ""
	I1209 05:54:37.029358 1437114 logs.go:282] 0 containers: []
	W1209 05:54:37.029370 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:54:37.029381 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:54:37.029479 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:54:37.060208 1437114 cri.go:89] found id: ""
	I1209 05:54:37.060235 1437114 logs.go:282] 0 containers: []
	W1209 05:54:37.060244 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:54:37.060251 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:54:37.060311 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:54:37.085969 1437114 cri.go:89] found id: ""
	I1209 05:54:37.085992 1437114 logs.go:282] 0 containers: []
	W1209 05:54:37.086001 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:54:37.086007 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:54:37.086066 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:54:37.114324 1437114 cri.go:89] found id: ""
	I1209 05:54:37.114357 1437114 logs.go:282] 0 containers: []
	W1209 05:54:37.114367 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:54:37.114373 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:54:37.114478 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:54:37.143312 1437114 cri.go:89] found id: ""
	I1209 05:54:37.143339 1437114 logs.go:282] 0 containers: []
	W1209 05:54:37.143348 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:54:37.143357 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:54:37.143369 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:54:37.234893 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:54:37.226773    8026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:37.227615    8026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:37.228809    8026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:37.229450    8026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:37.231054    8026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:54:37.226773    8026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:37.227615    8026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:37.228809    8026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:37.229450    8026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:37.231054    8026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:54:37.234921 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:54:37.234933 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:54:37.262601 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:54:37.262635 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:54:37.289433 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:54:37.289458 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:54:37.345400 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:54:37.345435 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:54:39.861840 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:54:39.873772 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:54:39.873850 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:54:39.901691 1437114 cri.go:89] found id: ""
	I1209 05:54:39.901714 1437114 logs.go:282] 0 containers: []
	W1209 05:54:39.901725 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:54:39.901731 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:54:39.901793 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:54:39.925900 1437114 cri.go:89] found id: ""
	I1209 05:54:39.925935 1437114 logs.go:282] 0 containers: []
	W1209 05:54:39.925944 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:54:39.925950 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:54:39.926009 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:54:39.951997 1437114 cri.go:89] found id: ""
	I1209 05:54:39.952041 1437114 logs.go:282] 0 containers: []
	W1209 05:54:39.952050 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:54:39.952056 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:54:39.952116 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:54:39.980156 1437114 cri.go:89] found id: ""
	I1209 05:54:39.980182 1437114 logs.go:282] 0 containers: []
	W1209 05:54:39.980190 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:54:39.980196 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:54:39.980255 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:54:40.007109 1437114 cri.go:89] found id: ""
	I1209 05:54:40.007136 1437114 logs.go:282] 0 containers: []
	W1209 05:54:40.007146 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:54:40.007154 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:54:40.007234 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:54:40.056170 1437114 cri.go:89] found id: ""
	I1209 05:54:40.056197 1437114 logs.go:282] 0 containers: []
	W1209 05:54:40.056207 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:54:40.056214 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:54:40.056298 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:54:40.085850 1437114 cri.go:89] found id: ""
	I1209 05:54:40.085879 1437114 logs.go:282] 0 containers: []
	W1209 05:54:40.085888 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:54:40.085894 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:54:40.085960 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:54:40.118208 1437114 cri.go:89] found id: ""
	I1209 05:54:40.118245 1437114 logs.go:282] 0 containers: []
	W1209 05:54:40.118256 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:54:40.118267 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:54:40.118281 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:54:40.195166 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:54:40.184383    8140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:40.185244    8140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:40.187445    8140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:40.188458    8140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:40.189404    8140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:54:40.184383    8140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:40.185244    8140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:40.187445    8140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:40.188458    8140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:40.189404    8140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:54:40.195189 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:54:40.195203 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:54:40.223567 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:54:40.223651 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:54:40.266759 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:54:40.266786 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:54:40.323783 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:54:40.323818 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:54:42.842021 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:54:42.852681 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:54:42.852755 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:54:42.876598 1437114 cri.go:89] found id: ""
	I1209 05:54:42.876622 1437114 logs.go:282] 0 containers: []
	W1209 05:54:42.876631 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:54:42.876637 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:54:42.876694 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:54:42.901491 1437114 cri.go:89] found id: ""
	I1209 05:54:42.901515 1437114 logs.go:282] 0 containers: []
	W1209 05:54:42.901523 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:54:42.901529 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:54:42.901588 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:54:42.930050 1437114 cri.go:89] found id: ""
	I1209 05:54:42.930077 1437114 logs.go:282] 0 containers: []
	W1209 05:54:42.930086 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:54:42.930093 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:54:42.930151 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:54:42.953794 1437114 cri.go:89] found id: ""
	I1209 05:54:42.953817 1437114 logs.go:282] 0 containers: []
	W1209 05:54:42.953825 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:54:42.953837 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:54:42.953940 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:54:42.977300 1437114 cri.go:89] found id: ""
	I1209 05:54:42.977324 1437114 logs.go:282] 0 containers: []
	W1209 05:54:42.977333 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:54:42.977339 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:54:42.977416 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:54:43.001015 1437114 cri.go:89] found id: ""
	I1209 05:54:43.001080 1437114 logs.go:282] 0 containers: []
	W1209 05:54:43.001095 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:54:43.001103 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:54:43.001169 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:54:43.026886 1437114 cri.go:89] found id: ""
	I1209 05:54:43.026910 1437114 logs.go:282] 0 containers: []
	W1209 05:54:43.026918 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:54:43.026925 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:54:43.026984 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:54:43.057227 1437114 cri.go:89] found id: ""
	I1209 05:54:43.057253 1437114 logs.go:282] 0 containers: []
	W1209 05:54:43.057271 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:54:43.057281 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:54:43.057293 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:54:43.115319 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:54:43.115357 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:54:43.131310 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:54:43.131346 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:54:43.204953 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:54:43.196603    8257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:43.197525    8257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:43.199091    8257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:43.199623    8257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:43.201121    8257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:54:43.196603    8257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:43.197525    8257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:43.199091    8257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:43.199623    8257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:43.201121    8257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:54:43.204975 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:54:43.204987 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:54:43.231713 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:54:43.231747 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:54:45.766147 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:54:45.776210 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:54:45.776285 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:54:45.804782 1437114 cri.go:89] found id: ""
	I1209 05:54:45.804810 1437114 logs.go:282] 0 containers: []
	W1209 05:54:45.804857 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:54:45.804871 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:54:45.804939 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:54:45.828660 1437114 cri.go:89] found id: ""
	I1209 05:54:45.828684 1437114 logs.go:282] 0 containers: []
	W1209 05:54:45.828692 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:54:45.828698 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:54:45.828758 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:54:45.853575 1437114 cri.go:89] found id: ""
	I1209 05:54:45.853598 1437114 logs.go:282] 0 containers: []
	W1209 05:54:45.853606 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:54:45.853612 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:54:45.853667 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:54:45.877674 1437114 cri.go:89] found id: ""
	I1209 05:54:45.877697 1437114 logs.go:282] 0 containers: []
	W1209 05:54:45.877705 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:54:45.877711 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:54:45.877775 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:54:45.902246 1437114 cri.go:89] found id: ""
	I1209 05:54:45.902270 1437114 logs.go:282] 0 containers: []
	W1209 05:54:45.902284 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:54:45.902291 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:54:45.902347 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:54:45.929443 1437114 cri.go:89] found id: ""
	I1209 05:54:45.929517 1437114 logs.go:282] 0 containers: []
	W1209 05:54:45.929532 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:54:45.929539 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:54:45.929596 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:54:45.955032 1437114 cri.go:89] found id: ""
	I1209 05:54:45.955065 1437114 logs.go:282] 0 containers: []
	W1209 05:54:45.955074 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:54:45.955081 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:54:45.955147 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:54:45.983502 1437114 cri.go:89] found id: ""
	I1209 05:54:45.983527 1437114 logs.go:282] 0 containers: []
	W1209 05:54:45.983535 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:54:45.983544 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:54:45.983555 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:54:46.049253 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:54:46.049292 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:54:46.066199 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:54:46.066229 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:54:46.133498 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:54:46.124747    8372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:46.125334    8372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:46.126986    8372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:46.127505    8372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:46.129096    8372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:54:46.124747    8372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:46.125334    8372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:46.126986    8372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:46.127505    8372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:46.129096    8372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:54:46.133521 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:54:46.133534 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:54:46.159468 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:54:46.159500 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:54:48.698046 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:54:48.710430 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:54:48.710504 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:54:48.739692 1437114 cri.go:89] found id: ""
	I1209 05:54:48.739718 1437114 logs.go:282] 0 containers: []
	W1209 05:54:48.739726 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:54:48.739733 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:54:48.739790 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:54:48.764166 1437114 cri.go:89] found id: ""
	I1209 05:54:48.764192 1437114 logs.go:282] 0 containers: []
	W1209 05:54:48.764200 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:54:48.764206 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:54:48.764264 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:54:48.788074 1437114 cri.go:89] found id: ""
	I1209 05:54:48.788097 1437114 logs.go:282] 0 containers: []
	W1209 05:54:48.788114 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:54:48.788122 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:54:48.788189 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:54:48.813373 1437114 cri.go:89] found id: ""
	I1209 05:54:48.813398 1437114 logs.go:282] 0 containers: []
	W1209 05:54:48.813407 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:54:48.813414 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:54:48.813472 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:54:48.840222 1437114 cri.go:89] found id: ""
	I1209 05:54:48.840248 1437114 logs.go:282] 0 containers: []
	W1209 05:54:48.840256 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:54:48.840270 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:54:48.840331 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:54:48.869002 1437114 cri.go:89] found id: ""
	I1209 05:54:48.869025 1437114 logs.go:282] 0 containers: []
	W1209 05:54:48.869034 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:54:48.869041 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:54:48.869098 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:54:48.897074 1437114 cri.go:89] found id: ""
	I1209 05:54:48.897100 1437114 logs.go:282] 0 containers: []
	W1209 05:54:48.897108 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:54:48.897115 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:54:48.897193 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:54:48.920534 1437114 cri.go:89] found id: ""
	I1209 05:54:48.920559 1437114 logs.go:282] 0 containers: []
	W1209 05:54:48.920567 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:54:48.920576 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:54:48.920588 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:54:48.976882 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:54:48.976918 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:54:48.992754 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:54:48.992782 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:54:49.058058 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:54:49.049574    8487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:49.050149    8487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:49.051765    8487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:49.052269    8487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:49.053870    8487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
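	Every "describe nodes" gather fails the same way: kubectl on the node dials https://localhost:8443, but since no kube-apiserver container exists (see the empty crictl sweeps above) nothing is listening and the TCP connect is refused. A quick hand check of the same condition, as a sketch under the same assumptions (a shell on the node; 8443 is the apiserver port this profile uses, per the errors above):

	    # /livez is the standard kube-apiserver health endpoint; -k skips cert
	    # verification and --max-time bounds the probe so it fails fast
	    curl -sk --max-time 2 https://localhost:8443/livez \
	      || echo "apiserver not reachable on :8443"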
	I1209 05:54:49.058079 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:54:49.058092 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:54:49.083543 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:54:49.083578 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
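	That closes one full collection cycle: inventory the CRI containers, then pull kubelet and containerd logs from journald, warning-level dmesg, a kubectl describe nodes attempt, and a crictl (or docker fallback) container listing. The pgrep probe on the next line is what drives the roughly three-second retry cadence visible in the timestamps. A rough shell equivalent of that wait loop, assuming the same node shell (the real loop is Go code in minikube and re-collects the logs above on every miss):

	    # -x exact match, -n newest process, -f match the full command line;
	    # this is the same pattern the log shows minikube polling with
	    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	      sleep 3
	    done
	    echo "kube-apiserver process is up"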
	I1209 05:54:51.613470 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:54:51.625228 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:54:51.625329 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:54:51.651832 1437114 cri.go:89] found id: ""
	I1209 05:54:51.651863 1437114 logs.go:282] 0 containers: []
	W1209 05:54:51.651871 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:54:51.651878 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:54:51.651989 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:54:51.689430 1437114 cri.go:89] found id: ""
	I1209 05:54:51.689471 1437114 logs.go:282] 0 containers: []
	W1209 05:54:51.689480 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:54:51.689486 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:54:51.689556 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:54:51.718333 1437114 cri.go:89] found id: ""
	I1209 05:54:51.718377 1437114 logs.go:282] 0 containers: []
	W1209 05:54:51.718387 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:54:51.718394 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:54:51.718468 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:54:51.748566 1437114 cri.go:89] found id: ""
	I1209 05:54:51.748641 1437114 logs.go:282] 0 containers: []
	W1209 05:54:51.748656 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:54:51.748663 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:54:51.748732 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:54:51.773149 1437114 cri.go:89] found id: ""
	I1209 05:54:51.773175 1437114 logs.go:282] 0 containers: []
	W1209 05:54:51.773184 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:54:51.773191 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:54:51.773283 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:54:51.802227 1437114 cri.go:89] found id: ""
	I1209 05:54:51.802253 1437114 logs.go:282] 0 containers: []
	W1209 05:54:51.802262 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:54:51.802272 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:54:51.802351 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:54:51.833926 1437114 cri.go:89] found id: ""
	I1209 05:54:51.833994 1437114 logs.go:282] 0 containers: []
	W1209 05:54:51.834016 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:54:51.834036 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:54:51.834126 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:54:51.859887 1437114 cri.go:89] found id: ""
	I1209 05:54:51.859919 1437114 logs.go:282] 0 containers: []
	W1209 05:54:51.859927 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:54:51.859937 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:54:51.859948 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:54:51.876110 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:54:51.876138 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:54:51.942848 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:54:51.934424    8598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:51.935014    8598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:51.936468    8598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:51.937091    8598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:51.938535    8598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:54:51.942870 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:54:51.942883 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:54:51.968433 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:54:51.968466 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:54:51.996383 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:54:51.996421 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:54:54.554719 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:54:54.565346 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:54:54.565415 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:54:54.593426 1437114 cri.go:89] found id: ""
	I1209 05:54:54.593450 1437114 logs.go:282] 0 containers: []
	W1209 05:54:54.593458 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:54:54.593464 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:54:54.593522 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:54:54.621281 1437114 cri.go:89] found id: ""
	I1209 05:54:54.621304 1437114 logs.go:282] 0 containers: []
	W1209 05:54:54.621312 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:54:54.621318 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:54:54.621376 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:54:54.646126 1437114 cri.go:89] found id: ""
	I1209 05:54:54.646194 1437114 logs.go:282] 0 containers: []
	W1209 05:54:54.646216 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:54:54.646234 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:54:54.646318 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:54:54.674944 1437114 cri.go:89] found id: ""
	I1209 05:54:54.674986 1437114 logs.go:282] 0 containers: []
	W1209 05:54:54.675011 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:54:54.675029 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:54:54.675110 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:54:54.700733 1437114 cri.go:89] found id: ""
	I1209 05:54:54.700766 1437114 logs.go:282] 0 containers: []
	W1209 05:54:54.700775 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:54:54.700781 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:54:54.700860 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:54:54.733555 1437114 cri.go:89] found id: ""
	I1209 05:54:54.733634 1437114 logs.go:282] 0 containers: []
	W1209 05:54:54.733656 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:54:54.733676 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:54:54.733777 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:54:54.759852 1437114 cri.go:89] found id: ""
	I1209 05:54:54.759926 1437114 logs.go:282] 0 containers: []
	W1209 05:54:54.759949 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:54:54.759972 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:54:54.760110 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:54:54.784303 1437114 cri.go:89] found id: ""
	I1209 05:54:54.784377 1437114 logs.go:282] 0 containers: []
	W1209 05:54:54.784392 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:54:54.784402 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:54:54.784413 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:54:54.809753 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:54:54.809790 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:54:54.836589 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:54:54.836617 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:54:54.899737 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:54:54.899784 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:54:54.915785 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:54:54.915814 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:54:54.979896 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:54:54.971488    8724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:54.971906    8724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:54.973479    8724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:54.974140    8724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:54.976063    8724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:54:57.480193 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:54:57.491395 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:54:57.491473 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:54:57.518091 1437114 cri.go:89] found id: ""
	I1209 05:54:57.518114 1437114 logs.go:282] 0 containers: []
	W1209 05:54:57.518123 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:54:57.518130 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:54:57.518191 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:54:57.545921 1437114 cri.go:89] found id: ""
	I1209 05:54:57.545954 1437114 logs.go:282] 0 containers: []
	W1209 05:54:57.545962 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:54:57.545969 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:54:57.546037 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:54:57.570249 1437114 cri.go:89] found id: ""
	I1209 05:54:57.570280 1437114 logs.go:282] 0 containers: []
	W1209 05:54:57.570290 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:54:57.570296 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:54:57.570367 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:54:57.597541 1437114 cri.go:89] found id: ""
	I1209 05:54:57.597565 1437114 logs.go:282] 0 containers: []
	W1209 05:54:57.597576 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:54:57.597583 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:54:57.597639 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:54:57.625351 1437114 cri.go:89] found id: ""
	I1209 05:54:57.625374 1437114 logs.go:282] 0 containers: []
	W1209 05:54:57.625382 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:54:57.625388 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:54:57.625446 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:54:57.653430 1437114 cri.go:89] found id: ""
	I1209 05:54:57.653504 1437114 logs.go:282] 0 containers: []
	W1209 05:54:57.653520 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:54:57.653528 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:54:57.653592 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:54:57.686655 1437114 cri.go:89] found id: ""
	I1209 05:54:57.686681 1437114 logs.go:282] 0 containers: []
	W1209 05:54:57.686704 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:54:57.686711 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:54:57.686783 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:54:57.715897 1437114 cri.go:89] found id: ""
	I1209 05:54:57.715924 1437114 logs.go:282] 0 containers: []
	W1209 05:54:57.715932 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:54:57.715941 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:54:57.715952 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:54:57.781835 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:54:57.781871 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:54:57.798499 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:54:57.798527 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:54:57.870136 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:54:57.856259    8825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:57.861442    8825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:57.864278    8825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:57.864723    8825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:54:57.866272    8825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:54:57.870169 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:54:57.870182 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:54:57.894760 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:54:57.894794 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:55:00.423491 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:55:00.436333 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:55:00.436416 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:55:00.477329 1437114 cri.go:89] found id: ""
	I1209 05:55:00.477357 1437114 logs.go:282] 0 containers: []
	W1209 05:55:00.477367 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:55:00.477373 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:55:00.477440 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:55:00.510439 1437114 cri.go:89] found id: ""
	I1209 05:55:00.510467 1437114 logs.go:282] 0 containers: []
	W1209 05:55:00.510477 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:55:00.510483 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:55:00.510565 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:55:00.539373 1437114 cri.go:89] found id: ""
	I1209 05:55:00.539404 1437114 logs.go:282] 0 containers: []
	W1209 05:55:00.539413 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:55:00.539420 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:55:00.539484 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:55:00.567440 1437114 cri.go:89] found id: ""
	I1209 05:55:00.567470 1437114 logs.go:282] 0 containers: []
	W1209 05:55:00.567479 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:55:00.567486 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:55:00.567547 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:55:00.603417 1437114 cri.go:89] found id: ""
	I1209 05:55:00.603442 1437114 logs.go:282] 0 containers: []
	W1209 05:55:00.603450 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:55:00.603456 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:55:00.603515 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:55:00.628877 1437114 cri.go:89] found id: ""
	I1209 05:55:00.628900 1437114 logs.go:282] 0 containers: []
	W1209 05:55:00.628909 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:55:00.628915 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:55:00.628972 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:55:00.657533 1437114 cri.go:89] found id: ""
	I1209 05:55:00.657562 1437114 logs.go:282] 0 containers: []
	W1209 05:55:00.657571 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:55:00.657578 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:55:00.657638 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:55:00.686066 1437114 cri.go:89] found id: ""
	I1209 05:55:00.686090 1437114 logs.go:282] 0 containers: []
	W1209 05:55:00.686099 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:55:00.686108 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:55:00.686120 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:55:00.708508 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:55:00.708588 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:55:00.777301 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:55:00.768863    8933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:00.769274    8933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:00.770892    8933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:00.771415    8933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:00.772464    8933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:55:00.777372 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:55:00.777394 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:55:00.802304 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:55:00.802337 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:55:00.829410 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:55:00.829436 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:55:03.385877 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:55:03.396171 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:55:03.396238 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:55:03.420742 1437114 cri.go:89] found id: ""
	I1209 05:55:03.420767 1437114 logs.go:282] 0 containers: []
	W1209 05:55:03.420775 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:55:03.420781 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:55:03.420837 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:55:03.458835 1437114 cri.go:89] found id: ""
	I1209 05:55:03.458861 1437114 logs.go:282] 0 containers: []
	W1209 05:55:03.458869 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:55:03.458876 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:55:03.458934 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:55:03.488300 1437114 cri.go:89] found id: ""
	I1209 05:55:03.488326 1437114 logs.go:282] 0 containers: []
	W1209 05:55:03.488334 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:55:03.488340 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:55:03.488400 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:55:03.516405 1437114 cri.go:89] found id: ""
	I1209 05:55:03.516432 1437114 logs.go:282] 0 containers: []
	W1209 05:55:03.516440 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:55:03.516446 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:55:03.516506 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:55:03.545401 1437114 cri.go:89] found id: ""
	I1209 05:55:03.545467 1437114 logs.go:282] 0 containers: []
	W1209 05:55:03.545492 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:55:03.545510 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:55:03.545597 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:55:03.570243 1437114 cri.go:89] found id: ""
	I1209 05:55:03.570316 1437114 logs.go:282] 0 containers: []
	W1209 05:55:03.570342 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:55:03.570357 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:55:03.570449 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:55:03.594930 1437114 cri.go:89] found id: ""
	I1209 05:55:03.594955 1437114 logs.go:282] 0 containers: []
	W1209 05:55:03.594965 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:55:03.594971 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:55:03.595030 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:55:03.619052 1437114 cri.go:89] found id: ""
	I1209 05:55:03.619080 1437114 logs.go:282] 0 containers: []
	W1209 05:55:03.619089 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:55:03.619098 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:55:03.619114 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:55:03.676980 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:55:03.677019 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:55:03.697398 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:55:03.697427 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:55:03.769575 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:55:03.761997    9046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:03.762424    9046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:03.763695    9046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:03.764060    9046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:03.765630    9046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:55:03.769607 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:55:03.769620 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:55:03.794589 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:55:03.794623 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:55:06.321615 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:55:06.331929 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:55:06.331999 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:55:06.358377 1437114 cri.go:89] found id: ""
	I1209 05:55:06.358403 1437114 logs.go:282] 0 containers: []
	W1209 05:55:06.358411 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:55:06.358418 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:55:06.358481 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:55:06.384508 1437114 cri.go:89] found id: ""
	I1209 05:55:06.384533 1437114 logs.go:282] 0 containers: []
	W1209 05:55:06.384542 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:55:06.384548 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:55:06.384607 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:55:06.408779 1437114 cri.go:89] found id: ""
	I1209 05:55:06.408801 1437114 logs.go:282] 0 containers: []
	W1209 05:55:06.408810 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:55:06.408816 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:55:06.408874 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:55:06.441591 1437114 cri.go:89] found id: ""
	I1209 05:55:06.441613 1437114 logs.go:282] 0 containers: []
	W1209 05:55:06.441622 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:55:06.441628 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:55:06.441689 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:55:06.474533 1437114 cri.go:89] found id: ""
	I1209 05:55:06.474555 1437114 logs.go:282] 0 containers: []
	W1209 05:55:06.474567 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:55:06.474574 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:55:06.474706 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:55:06.503583 1437114 cri.go:89] found id: ""
	I1209 05:55:06.503655 1437114 logs.go:282] 0 containers: []
	W1209 05:55:06.503677 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:55:06.503697 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:55:06.503785 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:55:06.529409 1437114 cri.go:89] found id: ""
	I1209 05:55:06.529434 1437114 logs.go:282] 0 containers: []
	W1209 05:55:06.529443 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:55:06.529449 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:55:06.529508 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:55:06.559184 1437114 cri.go:89] found id: ""
	I1209 05:55:06.559254 1437114 logs.go:282] 0 containers: []
	W1209 05:55:06.559289 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:55:06.559317 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:55:06.559341 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:55:06.616116 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:55:06.616152 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:55:06.632189 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:55:06.632218 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:55:06.703879 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:55:06.694883    9157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:06.695859    9157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:06.697486    9157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:06.698063    9157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:06.699592    9157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:55:06.703908 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:55:06.703924 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:55:06.733107 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:55:06.733166 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:55:09.268085 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:55:09.278413 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:55:09.278488 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:55:09.301738 1437114 cri.go:89] found id: ""
	I1209 05:55:09.301764 1437114 logs.go:282] 0 containers: []
	W1209 05:55:09.301773 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:55:09.301779 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:55:09.301836 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:55:09.329939 1437114 cri.go:89] found id: ""
	I1209 05:55:09.329962 1437114 logs.go:282] 0 containers: []
	W1209 05:55:09.329970 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:55:09.329976 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:55:09.330032 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:55:09.358792 1437114 cri.go:89] found id: ""
	I1209 05:55:09.358825 1437114 logs.go:282] 0 containers: []
	W1209 05:55:09.358834 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:55:09.358840 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:55:09.358934 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:55:09.383783 1437114 cri.go:89] found id: ""
	I1209 05:55:09.383806 1437114 logs.go:282] 0 containers: []
	W1209 05:55:09.383814 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:55:09.383820 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:55:09.383881 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:55:09.409956 1437114 cri.go:89] found id: ""
	I1209 05:55:09.409982 1437114 logs.go:282] 0 containers: []
	W1209 05:55:09.409990 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:55:09.409997 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:55:09.410054 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:55:09.442388 1437114 cri.go:89] found id: ""
	I1209 05:55:09.442471 1437114 logs.go:282] 0 containers: []
	W1209 05:55:09.442502 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:55:09.442524 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:55:09.442611 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:55:09.472213 1437114 cri.go:89] found id: ""
	I1209 05:55:09.472234 1437114 logs.go:282] 0 containers: []
	W1209 05:55:09.472243 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:55:09.472249 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:55:09.472306 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:55:09.500348 1437114 cri.go:89] found id: ""
	I1209 05:55:09.500372 1437114 logs.go:282] 0 containers: []
	W1209 05:55:09.500381 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:55:09.500390 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:55:09.500401 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:55:09.556960 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:55:09.556998 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:55:09.573143 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:55:09.573173 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:55:09.641645 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:55:09.634078    9266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:09.634591    9266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:09.636259    9266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:09.636782    9266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:09.637775    9266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:55:09.641669 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:55:09.641682 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:55:09.667979 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:55:09.668100 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:55:12.205096 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:55:12.215660 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:55:12.215729 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:55:12.239566 1437114 cri.go:89] found id: ""
	I1209 05:55:12.239594 1437114 logs.go:282] 0 containers: []
	W1209 05:55:12.239603 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:55:12.239609 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:55:12.239668 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:55:12.267891 1437114 cri.go:89] found id: ""
	I1209 05:55:12.267914 1437114 logs.go:282] 0 containers: []
	W1209 05:55:12.267924 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:55:12.267930 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:55:12.267992 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:55:12.296494 1437114 cri.go:89] found id: ""
	I1209 05:55:12.296523 1437114 logs.go:282] 0 containers: []
	W1209 05:55:12.296532 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:55:12.296539 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:55:12.296602 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:55:12.322105 1437114 cri.go:89] found id: ""
	I1209 05:55:12.322135 1437114 logs.go:282] 0 containers: []
	W1209 05:55:12.322144 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:55:12.322151 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:55:12.322208 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:55:12.347978 1437114 cri.go:89] found id: ""
	I1209 05:55:12.348001 1437114 logs.go:282] 0 containers: []
	W1209 05:55:12.348010 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:55:12.348038 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:55:12.348096 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:55:12.372241 1437114 cri.go:89] found id: ""
	I1209 05:55:12.372275 1437114 logs.go:282] 0 containers: []
	W1209 05:55:12.372311 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:55:12.372318 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:55:12.372384 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:55:12.397758 1437114 cri.go:89] found id: ""
	I1209 05:55:12.397784 1437114 logs.go:282] 0 containers: []
	W1209 05:55:12.397792 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:55:12.397799 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:55:12.397860 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:55:12.422922 1437114 cri.go:89] found id: ""
	I1209 05:55:12.422948 1437114 logs.go:282] 0 containers: []
	W1209 05:55:12.422958 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:55:12.422968 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:55:12.422981 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:55:12.480231 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:55:12.480268 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:55:12.497991 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:55:12.498029 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:55:12.565247 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:55:12.557686    9379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:12.558053    9379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:12.559575    9379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:12.559888    9379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:12.561291    9379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:55:12.565279 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:55:12.565293 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:55:12.590420 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:55:12.590459 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
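
Each cycle above begins with minikube sweeping the CRI for every expected control-plane container and finding none. The same sweep can be reproduced by hand with the exact crictl invocation shown in the Run: lines; the loop below is just a compact sketch over the same component list:

    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(sudo crictl ps -a --quiet --name="$c")
      # An empty result corresponds to the log's found id: "" lines.
      echo "$c: ${ids:-<none>}"
    done
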
	I1209 05:55:15.122535 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:55:15.133065 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:55:15.133140 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:55:15.158369 1437114 cri.go:89] found id: ""
	I1209 05:55:15.158393 1437114 logs.go:282] 0 containers: []
	W1209 05:55:15.158401 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:55:15.158407 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:55:15.158492 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:55:15.184526 1437114 cri.go:89] found id: ""
	I1209 05:55:15.184550 1437114 logs.go:282] 0 containers: []
	W1209 05:55:15.184558 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:55:15.184564 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:55:15.184627 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:55:15.210248 1437114 cri.go:89] found id: ""
	I1209 05:55:15.210288 1437114 logs.go:282] 0 containers: []
	W1209 05:55:15.210300 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:55:15.210312 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:55:15.210376 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:55:15.239458 1437114 cri.go:89] found id: ""
	I1209 05:55:15.239486 1437114 logs.go:282] 0 containers: []
	W1209 05:55:15.239495 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:55:15.239501 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:55:15.239560 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:55:15.265625 1437114 cri.go:89] found id: ""
	I1209 05:55:15.265649 1437114 logs.go:282] 0 containers: []
	W1209 05:55:15.265658 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:55:15.265664 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:55:15.265729 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:55:15.289543 1437114 cri.go:89] found id: ""
	I1209 05:55:15.289577 1437114 logs.go:282] 0 containers: []
	W1209 05:55:15.289587 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:55:15.289593 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:55:15.289663 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:55:15.314575 1437114 cri.go:89] found id: ""
	I1209 05:55:15.314610 1437114 logs.go:282] 0 containers: []
	W1209 05:55:15.314618 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:55:15.314625 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:55:15.314704 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:55:15.339832 1437114 cri.go:89] found id: ""
	I1209 05:55:15.339858 1437114 logs.go:282] 0 containers: []
	W1209 05:55:15.339865 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:55:15.339875 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:55:15.339890 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:55:15.356748 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:55:15.356774 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:55:15.418122 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:55:15.410189    9483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:15.410797    9483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:15.412374    9483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:15.412679    9483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:15.414149    9483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:55:15.418145 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:55:15.418157 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:55:15.446826 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:55:15.446866 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:55:15.483531 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:55:15.483560 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
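
The "Gathering logs" steps shell out to journalctl and dmesg with a fixed tail length. The equivalent commands when triaging a run like this by hand, with flags copied from the log (--no-pager is added here for non-interactive use):

    sudo journalctl -u kubelet -n 400 --no-pager
    sudo journalctl -u containerd -n 400 --no-pager
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400

The kubelet unit log is the most useful of the three in this failure mode, since kubelet is what should be launching the control-plane containers that crictl cannot find.
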
	I1209 05:55:18.042444 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:55:18.053775 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:55:18.053853 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:55:18.090768 1437114 cri.go:89] found id: ""
	I1209 05:55:18.090790 1437114 logs.go:282] 0 containers: []
	W1209 05:55:18.090800 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:55:18.090806 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:55:18.090869 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:55:18.117411 1437114 cri.go:89] found id: ""
	I1209 05:55:18.117438 1437114 logs.go:282] 0 containers: []
	W1209 05:55:18.117448 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:55:18.117458 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:55:18.117516 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:55:18.143495 1437114 cri.go:89] found id: ""
	I1209 05:55:18.143523 1437114 logs.go:282] 0 containers: []
	W1209 05:55:18.143531 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:55:18.143538 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:55:18.143601 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:55:18.169282 1437114 cri.go:89] found id: ""
	I1209 05:55:18.169310 1437114 logs.go:282] 0 containers: []
	W1209 05:55:18.169319 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:55:18.169325 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:55:18.169387 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:55:18.194143 1437114 cri.go:89] found id: ""
	I1209 05:55:18.194210 1437114 logs.go:282] 0 containers: []
	W1209 05:55:18.194234 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:55:18.194248 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:55:18.194319 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:55:18.218540 1437114 cri.go:89] found id: ""
	I1209 05:55:18.218564 1437114 logs.go:282] 0 containers: []
	W1209 05:55:18.218573 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:55:18.218579 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:55:18.218635 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:55:18.242500 1437114 cri.go:89] found id: ""
	I1209 05:55:18.242533 1437114 logs.go:282] 0 containers: []
	W1209 05:55:18.242541 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:55:18.242554 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:55:18.242625 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:55:18.268163 1437114 cri.go:89] found id: ""
	I1209 05:55:18.268189 1437114 logs.go:282] 0 containers: []
	W1209 05:55:18.268198 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:55:18.268207 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:55:18.268219 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:55:18.325316 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:55:18.325352 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:55:18.341326 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:55:18.341355 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:55:18.406565 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:55:18.398134    9600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:18.398838    9600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:18.400544    9600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:18.401064    9600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:18.402624    9600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:55:18.406588 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:55:18.406601 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:55:18.432715 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:55:18.433008 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:55:20.971861 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:55:20.983326 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:55:20.983402 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:55:21.009562 1437114 cri.go:89] found id: ""
	I1209 05:55:21.009588 1437114 logs.go:282] 0 containers: []
	W1209 05:55:21.009598 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:55:21.009606 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:55:21.009671 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:55:21.034329 1437114 cri.go:89] found id: ""
	I1209 05:55:21.034355 1437114 logs.go:282] 0 containers: []
	W1209 05:55:21.034364 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:55:21.034370 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:55:21.034444 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:55:21.058554 1437114 cri.go:89] found id: ""
	I1209 05:55:21.058575 1437114 logs.go:282] 0 containers: []
	W1209 05:55:21.058584 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:55:21.058592 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:55:21.058648 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:55:21.086391 1437114 cri.go:89] found id: ""
	I1209 05:55:21.086416 1437114 logs.go:282] 0 containers: []
	W1209 05:55:21.086425 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:55:21.086432 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:55:21.086495 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:55:21.113734 1437114 cri.go:89] found id: ""
	I1209 05:55:21.113757 1437114 logs.go:282] 0 containers: []
	W1209 05:55:21.113771 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:55:21.113777 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:55:21.113836 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:55:21.138081 1437114 cri.go:89] found id: ""
	I1209 05:55:21.138106 1437114 logs.go:282] 0 containers: []
	W1209 05:55:21.138115 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:55:21.138122 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:55:21.138188 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:55:21.162430 1437114 cri.go:89] found id: ""
	I1209 05:55:21.162454 1437114 logs.go:282] 0 containers: []
	W1209 05:55:21.162462 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:55:21.162468 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:55:21.162527 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:55:21.187241 1437114 cri.go:89] found id: ""
	I1209 05:55:21.187269 1437114 logs.go:282] 0 containers: []
	W1209 05:55:21.187277 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:55:21.187286 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:55:21.187298 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:55:21.243731 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:55:21.243768 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:55:21.259723 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:55:21.259752 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:55:21.331265 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:55:21.322926    9714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:21.323669    9714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:21.325163    9714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:21.325582    9714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:21.327036    9714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:55:21.331287 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:55:21.331300 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:55:21.357424 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:55:21.357460 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
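
When crictl reports no control-plane containers at all, the usual next step is to check whether kubelet even has static pod manifests to start them from; minikube, like kubeadm, places these under /etc/kubernetes/manifests. A quick look, assuming shell access to the node (the grep pattern is an arbitrary filter for illustration):

    # Static pod manifests kubelet should be running the control plane from
    ls -l /etc/kubernetes/manifests/

    # Kubelet's own account of why nothing is starting
    sudo journalctl -u kubelet -n 200 --no-pager | grep -iE 'error|fail'
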
	I1209 05:55:23.888418 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:55:23.899458 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:55:23.899526 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:55:23.923896 1437114 cri.go:89] found id: ""
	I1209 05:55:23.923962 1437114 logs.go:282] 0 containers: []
	W1209 05:55:23.923986 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:55:23.924004 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:55:23.924112 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:55:23.951339 1437114 cri.go:89] found id: ""
	I1209 05:55:23.951409 1437114 logs.go:282] 0 containers: []
	W1209 05:55:23.951432 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:55:23.951450 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:55:23.951535 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:55:23.980727 1437114 cri.go:89] found id: ""
	I1209 05:55:23.980797 1437114 logs.go:282] 0 containers: []
	W1209 05:55:23.980821 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:55:23.980838 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:55:23.980927 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:55:24.018661 1437114 cri.go:89] found id: ""
	I1209 05:55:24.018691 1437114 logs.go:282] 0 containers: []
	W1209 05:55:24.018702 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:55:24.018709 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:55:24.018778 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:55:24.049508 1437114 cri.go:89] found id: ""
	I1209 05:55:24.049536 1437114 logs.go:282] 0 containers: []
	W1209 05:55:24.049545 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:55:24.049551 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:55:24.049610 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:55:24.074712 1437114 cri.go:89] found id: ""
	I1209 05:55:24.074741 1437114 logs.go:282] 0 containers: []
	W1209 05:55:24.074751 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:55:24.074757 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:55:24.074825 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:55:24.100769 1437114 cri.go:89] found id: ""
	I1209 05:55:24.100795 1437114 logs.go:282] 0 containers: []
	W1209 05:55:24.100804 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:55:24.100810 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:55:24.100871 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:55:24.125003 1437114 cri.go:89] found id: ""
	I1209 05:55:24.125031 1437114 logs.go:282] 0 containers: []
	W1209 05:55:24.125039 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:55:24.125049 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:55:24.125061 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:55:24.194763 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:55:24.186517    9818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:24.187020    9818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:24.188525    9818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:24.188998    9818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:24.190667    9818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:55:24.194832 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:55:24.194870 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:55:24.220205 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:55:24.220239 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:55:24.246742 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:55:24.246769 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:55:24.303551 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:55:24.303584 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:55:26.819975 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:55:26.830655 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:55:26.830725 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:55:26.858629 1437114 cri.go:89] found id: ""
	I1209 05:55:26.858653 1437114 logs.go:282] 0 containers: []
	W1209 05:55:26.858661 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:55:26.858667 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:55:26.858733 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:55:26.883327 1437114 cri.go:89] found id: ""
	I1209 05:55:26.883354 1437114 logs.go:282] 0 containers: []
	W1209 05:55:26.883363 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:55:26.883369 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:55:26.883431 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:55:26.909455 1437114 cri.go:89] found id: ""
	I1209 05:55:26.909475 1437114 logs.go:282] 0 containers: []
	W1209 05:55:26.909484 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:55:26.909490 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:55:26.909551 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:55:26.940313 1437114 cri.go:89] found id: ""
	I1209 05:55:26.940345 1437114 logs.go:282] 0 containers: []
	W1209 05:55:26.940358 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:55:26.940365 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:55:26.940432 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:55:26.974610 1437114 cri.go:89] found id: ""
	I1209 05:55:26.974686 1437114 logs.go:282] 0 containers: []
	W1209 05:55:26.974708 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:55:26.974725 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:55:26.974817 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:55:27.007512 1437114 cri.go:89] found id: ""
	I1209 05:55:27.007592 1437114 logs.go:282] 0 containers: []
	W1209 05:55:27.007616 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:55:27.007637 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:55:27.007748 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:55:27.032955 1437114 cri.go:89] found id: ""
	I1209 05:55:27.033029 1437114 logs.go:282] 0 containers: []
	W1209 05:55:27.033053 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:55:27.033071 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:55:27.033155 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:55:27.057112 1437114 cri.go:89] found id: ""
	I1209 05:55:27.057177 1437114 logs.go:282] 0 containers: []
	W1209 05:55:27.057191 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:55:27.057202 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:55:27.057219 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:55:27.118936 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:55:27.110736    9930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:27.111264    9930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:27.112691    9930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:27.112981    9930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:27.114451    9930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:55:27.118961 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:55:27.118974 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:55:27.144106 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:55:27.144179 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:55:27.174234 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:55:27.174260 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:55:27.230096 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:55:27.230129 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:55:29.746369 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:55:29.756575 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:55:29.756649 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:55:29.784727 1437114 cri.go:89] found id: ""
	I1209 05:55:29.784750 1437114 logs.go:282] 0 containers: []
	W1209 05:55:29.784758 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:55:29.784764 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:55:29.784824 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:55:29.808671 1437114 cri.go:89] found id: ""
	I1209 05:55:29.808696 1437114 logs.go:282] 0 containers: []
	W1209 05:55:29.808705 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:55:29.808711 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:55:29.808793 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:55:29.832440 1437114 cri.go:89] found id: ""
	I1209 05:55:29.832470 1437114 logs.go:282] 0 containers: []
	W1209 05:55:29.832479 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:55:29.832485 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:55:29.832549 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:55:29.857781 1437114 cri.go:89] found id: ""
	I1209 05:55:29.857807 1437114 logs.go:282] 0 containers: []
	W1209 05:55:29.857815 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:55:29.857821 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:55:29.857901 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:55:29.882048 1437114 cri.go:89] found id: ""
	I1209 05:55:29.882073 1437114 logs.go:282] 0 containers: []
	W1209 05:55:29.882081 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:55:29.882087 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:55:29.882176 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:55:29.905398 1437114 cri.go:89] found id: ""
	I1209 05:55:29.905422 1437114 logs.go:282] 0 containers: []
	W1209 05:55:29.905431 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:55:29.905438 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:55:29.905526 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:55:29.931783 1437114 cri.go:89] found id: ""
	I1209 05:55:29.931816 1437114 logs.go:282] 0 containers: []
	W1209 05:55:29.931824 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:55:29.931831 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:55:29.931903 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:55:29.961633 1437114 cri.go:89] found id: ""
	I1209 05:55:29.961665 1437114 logs.go:282] 0 containers: []
	W1209 05:55:29.961673 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:55:29.961683 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:55:29.961695 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:55:30.041769 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:55:30.025451   10042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:30.026529   10042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:30.027374   10042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:30.029780   10042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:30.030693   10042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:55:30.041793 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:55:30.041807 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:55:30.069912 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:55:30.069946 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:55:30.104202 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:55:30.104232 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:55:30.162750 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:55:30.162784 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:55:32.680152 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:55:32.694260 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:55:32.694425 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:55:32.728965 1437114 cri.go:89] found id: ""
	I1209 05:55:32.729064 1437114 logs.go:282] 0 containers: []
	W1209 05:55:32.729088 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:55:32.729108 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:55:32.729212 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:55:32.760196 1437114 cri.go:89] found id: ""
	I1209 05:55:32.760220 1437114 logs.go:282] 0 containers: []
	W1209 05:55:32.760228 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:55:32.760235 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:55:32.760303 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:55:32.785415 1437114 cri.go:89] found id: ""
	I1209 05:55:32.785448 1437114 logs.go:282] 0 containers: []
	W1209 05:55:32.785457 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:55:32.785463 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:55:32.785528 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:55:32.809252 1437114 cri.go:89] found id: ""
	I1209 05:55:32.809327 1437114 logs.go:282] 0 containers: []
	W1209 05:55:32.809343 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:55:32.809357 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:55:32.809417 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:55:32.834255 1437114 cri.go:89] found id: ""
	I1209 05:55:32.834281 1437114 logs.go:282] 0 containers: []
	W1209 05:55:32.834295 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:55:32.834302 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:55:32.834362 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:55:32.859400 1437114 cri.go:89] found id: ""
	I1209 05:55:32.859426 1437114 logs.go:282] 0 containers: []
	W1209 05:55:32.859443 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:55:32.859450 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:55:32.859519 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:55:32.897012 1437114 cri.go:89] found id: ""
	I1209 05:55:32.897037 1437114 logs.go:282] 0 containers: []
	W1209 05:55:32.897046 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:55:32.897053 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:55:32.897167 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:55:32.921653 1437114 cri.go:89] found id: ""
	I1209 05:55:32.921685 1437114 logs.go:282] 0 containers: []
	W1209 05:55:32.921693 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:55:32.921703 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:55:32.921713 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:55:32.948373 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:55:32.948454 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:55:32.981605 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:55:32.981678 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:55:33.043445 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:55:33.043481 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:55:33.059128 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:55:33.059160 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:55:33.122257 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:55:33.113462   10172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:33.113864   10172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:33.115638   10172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:33.116342   10172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:33.117919   10172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
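	Each failed "describe nodes" above has the same root cause: kubectl dials localhost:8443 and the TCP connection is refused, meaning nothing is listening on the apiserver port at all. A minimal connectivity probe in that spirit, assuming the standard apiserver /readyz endpoint and the port shown in the log (an illustrative sketch, not minikube's own check):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Skip certificate verification: this only tests whether anything
		// is listening, not whether the cluster CA is trusted.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://localhost:8443/readyz")
	if err != nil {
		// With no apiserver bound to 8443 this fails exactly like the
		// kubectl errors above: dial tcp ... connection refused.
		fmt.Println("apiserver not reachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("apiserver status:", resp.Status)
}
```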
	I1209 05:55:35.623296 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:55:35.635539 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:55:35.635647 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:55:35.663702 1437114 cri.go:89] found id: ""
	I1209 05:55:35.663741 1437114 logs.go:282] 0 containers: []
	W1209 05:55:35.663753 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:55:35.663760 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:55:35.663865 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:55:35.707406 1437114 cri.go:89] found id: ""
	I1209 05:55:35.707485 1437114 logs.go:282] 0 containers: []
	W1209 05:55:35.707508 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:55:35.707544 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:55:35.707629 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:55:35.734669 1437114 cri.go:89] found id: ""
	I1209 05:55:35.734749 1437114 logs.go:282] 0 containers: []
	W1209 05:55:35.734771 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:55:35.734811 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:55:35.734897 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:55:35.764935 1437114 cri.go:89] found id: ""
	I1209 05:55:35.765012 1437114 logs.go:282] 0 containers: []
	W1209 05:55:35.765036 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:55:35.765054 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:55:35.765127 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:55:35.788829 1437114 cri.go:89] found id: ""
	I1209 05:55:35.788853 1437114 logs.go:282] 0 containers: []
	W1209 05:55:35.788869 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:55:35.788876 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:55:35.788978 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:55:35.813639 1437114 cri.go:89] found id: ""
	I1209 05:55:35.813666 1437114 logs.go:282] 0 containers: []
	W1209 05:55:35.813674 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:55:35.813681 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:55:35.813787 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:55:35.843416 1437114 cri.go:89] found id: ""
	I1209 05:55:35.843460 1437114 logs.go:282] 0 containers: []
	W1209 05:55:35.843469 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:55:35.843481 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:55:35.843555 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:55:35.868194 1437114 cri.go:89] found id: ""
	I1209 05:55:35.868221 1437114 logs.go:282] 0 containers: []
	W1209 05:55:35.868231 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:55:35.868239 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:55:35.868251 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:55:35.925041 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:55:35.925080 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:55:35.951129 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:55:35.951341 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:55:36.030987 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:55:36.022457   10271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:36.023229   10271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:36.023993   10271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:36.025131   10271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:36.025699   10271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:55:36.031012 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:55:36.031026 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:55:36.058849 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:55:36.058884 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
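	The block that repeats every ~3 seconds here is a wait loop: look for a kube-apiserver process, then ask the CRI for a kube-apiserver container, and dump diagnostics when neither appears. A rough sketch of that shape, with the two probe commands copied from the log; the loop structure, timeout, and interval are assumptions, not minikube's actual implementation:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// apiserverUp mirrors the two probes in the log: a process check
// ("sudo pgrep -xnf kube-apiserver.*minikube.*") and a CRI check
// ("sudo crictl ps -a --quiet --name=kube-apiserver").
func apiserverUp() bool {
	if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
		return true
	}
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name=kube-apiserver").Output()
	return err == nil && strings.TrimSpace(string(out)) != ""
}

func main() {
	deadline := time.Now().Add(4 * time.Minute) // assumed overall timeout
	for time.Now().Before(deadline) {
		if apiserverUp() {
			fmt.Println("kube-apiserver is up")
			return
		}
		time.Sleep(3 * time.Second) // the log shows ~3s between cycles
	}
	fmt.Println("timed out waiting for kube-apiserver")
}
```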
	I1209 05:55:38.588358 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:55:38.598423 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:55:38.598488 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:55:38.622572 1437114 cri.go:89] found id: ""
	I1209 05:55:38.622596 1437114 logs.go:282] 0 containers: []
	W1209 05:55:38.622605 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:55:38.622612 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:55:38.622669 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:55:38.650917 1437114 cri.go:89] found id: ""
	I1209 05:55:38.650942 1437114 logs.go:282] 0 containers: []
	W1209 05:55:38.650950 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:55:38.650956 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:55:38.651013 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:55:38.677402 1437114 cri.go:89] found id: ""
	I1209 05:55:38.677435 1437114 logs.go:282] 0 containers: []
	W1209 05:55:38.677444 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:55:38.677451 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:55:38.677558 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:55:38.707295 1437114 cri.go:89] found id: ""
	I1209 05:55:38.707328 1437114 logs.go:282] 0 containers: []
	W1209 05:55:38.707337 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:55:38.707344 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:55:38.707453 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:55:38.740627 1437114 cri.go:89] found id: ""
	I1209 05:55:38.740652 1437114 logs.go:282] 0 containers: []
	W1209 05:55:38.740660 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:55:38.740667 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:55:38.740727 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:55:38.764991 1437114 cri.go:89] found id: ""
	I1209 05:55:38.765017 1437114 logs.go:282] 0 containers: []
	W1209 05:55:38.765027 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:55:38.765033 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:55:38.765095 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:55:38.789303 1437114 cri.go:89] found id: ""
	I1209 05:55:38.789328 1437114 logs.go:282] 0 containers: []
	W1209 05:55:38.789336 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:55:38.789343 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:55:38.789401 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:55:38.812509 1437114 cri.go:89] found id: ""
	I1209 05:55:38.812533 1437114 logs.go:282] 0 containers: []
	W1209 05:55:38.812541 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:55:38.812551 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:55:38.812562 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:55:38.869277 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:55:38.869309 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:55:38.885634 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:55:38.885663 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:55:38.967787 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:55:38.957406   10382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:38.958335   10382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:38.960358   10382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:38.961032   10382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:38.963013   10382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:55:38.967812 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:55:38.967828 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:55:39.000576 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:55:39.000615 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
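	Each "Gathering logs for ..." step is a bash one-liner executed on the node over SSH; the container-status line probes for crictl and falls back to docker. A self-contained sketch that runs the same commands locally (the commands are copied verbatim from the log, the harness around them is assumed):

```go
package main

import (
	"fmt"
	"os/exec"
)

var sources = []struct{ name, cmd string }{
	{"kubelet", "sudo journalctl -u kubelet -n 400"},
	{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
	{"containerd", "sudo journalctl -u containerd -n 400"},
	// `which crictl || echo crictl` keeps the command valid even when
	// crictl is absent, so "|| sudo docker ps -a" can take over.
	{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
}

func main() {
	for _, s := range sources {
		out, err := exec.Command("/bin/bash", "-c", s.cmd).CombinedOutput()
		fmt.Printf("== %s (err=%v) ==\n%s\n", s.name, err, out)
	}
}
```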
	I1209 05:55:41.533393 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:55:41.544133 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:55:41.544208 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:55:41.569392 1437114 cri.go:89] found id: ""
	I1209 05:55:41.569418 1437114 logs.go:282] 0 containers: []
	W1209 05:55:41.569428 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:55:41.569436 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:55:41.569499 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:55:41.595491 1437114 cri.go:89] found id: ""
	I1209 05:55:41.595517 1437114 logs.go:282] 0 containers: []
	W1209 05:55:41.595526 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:55:41.595532 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:55:41.595592 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:55:41.622211 1437114 cri.go:89] found id: ""
	I1209 05:55:41.622246 1437114 logs.go:282] 0 containers: []
	W1209 05:55:41.622256 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:55:41.622263 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:55:41.622323 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:55:41.646745 1437114 cri.go:89] found id: ""
	I1209 05:55:41.646770 1437114 logs.go:282] 0 containers: []
	W1209 05:55:41.646779 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:55:41.646785 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:55:41.646846 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:55:41.674665 1437114 cri.go:89] found id: ""
	I1209 05:55:41.674689 1437114 logs.go:282] 0 containers: []
	W1209 05:55:41.674699 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:55:41.674706 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:55:41.674768 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:55:41.702586 1437114 cri.go:89] found id: ""
	I1209 05:55:41.702610 1437114 logs.go:282] 0 containers: []
	W1209 05:55:41.702619 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:55:41.702628 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:55:41.702704 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:55:41.741493 1437114 cri.go:89] found id: ""
	I1209 05:55:41.741515 1437114 logs.go:282] 0 containers: []
	W1209 05:55:41.741523 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:55:41.741530 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:55:41.741666 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:55:41.768353 1437114 cri.go:89] found id: ""
	I1209 05:55:41.768465 1437114 logs.go:282] 0 containers: []
	W1209 05:55:41.768479 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:55:41.768490 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:55:41.768529 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:55:41.831484 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:55:41.823412   10488 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:41.824163   10488 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:41.825769   10488 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:41.826063   10488 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:41.827557   10488 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:55:41.831504 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:55:41.831517 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:55:41.857187 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:55:41.857222 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:55:41.887092 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:55:41.887123 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:55:41.943306 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:55:41.943341 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
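	The "describe nodes" gather shells out to the version-matched kubectl under /var/lib/minikube/binaries and points it at the node-local kubeconfig; failure is tolerated, which is why it surfaces only as W-level "failed describe nodes" entries rather than aborting the loop. A sketch under those assumptions, with paths copied from the log:

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	const kubectl = "/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl"
	out, err := exec.Command("sudo", kubectl, "describe", "nodes",
		"--kubeconfig=/var/lib/minikube/kubeconfig").CombinedOutput()
	if err != nil {
		// Non-fatal, mirroring the W-level log entries above.
		fmt.Printf("failed describe nodes: %v\n%s", err, out)
		return
	}
	fmt.Printf("%s", out)
}
```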
	I1209 05:55:44.461424 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:55:44.472240 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:55:44.472340 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:55:44.498935 1437114 cri.go:89] found id: ""
	I1209 05:55:44.498961 1437114 logs.go:282] 0 containers: []
	W1209 05:55:44.498970 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:55:44.498976 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:55:44.499034 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:55:44.523535 1437114 cri.go:89] found id: ""
	I1209 05:55:44.523564 1437114 logs.go:282] 0 containers: []
	W1209 05:55:44.523573 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:55:44.523579 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:55:44.523637 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:55:44.548432 1437114 cri.go:89] found id: ""
	I1209 05:55:44.548455 1437114 logs.go:282] 0 containers: []
	W1209 05:55:44.548463 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:55:44.548469 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:55:44.548526 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:55:44.573002 1437114 cri.go:89] found id: ""
	I1209 05:55:44.573024 1437114 logs.go:282] 0 containers: []
	W1209 05:55:44.573034 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:55:44.573040 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:55:44.573098 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:55:44.596807 1437114 cri.go:89] found id: ""
	I1209 05:55:44.596829 1437114 logs.go:282] 0 containers: []
	W1209 05:55:44.596838 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:55:44.596846 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:55:44.596901 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:55:44.624387 1437114 cri.go:89] found id: ""
	I1209 05:55:44.624456 1437114 logs.go:282] 0 containers: []
	W1209 05:55:44.624478 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:55:44.624492 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:55:44.624571 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:55:44.648117 1437114 cri.go:89] found id: ""
	I1209 05:55:44.648143 1437114 logs.go:282] 0 containers: []
	W1209 05:55:44.648151 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:55:44.648158 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:55:44.648229 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:55:44.671908 1437114 cri.go:89] found id: ""
	I1209 05:55:44.671939 1437114 logs.go:282] 0 containers: []
	W1209 05:55:44.671948 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:55:44.671972 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:55:44.671989 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:55:44.732458 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:55:44.732536 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:55:44.753248 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:55:44.753273 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:55:44.822117 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:55:44.814788   10605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:44.815161   10605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:44.816602   10605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:44.816898   10605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:44.818170   10605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:55:44.822137 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:55:44.822149 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:55:44.848565 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:55:44.848600 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
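	The "{State:all Name:kube-proxy Namespaces:[]}" fragments in the cri.go lines are a Go struct printed with %+v: State:all is what becomes the -a flag and Name becomes --name=<value> on the crictl command line. A reproduction with a struct of the same shape (field names taken from the log output; the type itself is illustrative, not minikube's):

```go
package main

import "fmt"

// Illustrative type; the field names come from the log output.
type ContainersOptions struct {
	State      string
	Name       string
	Namespaces []string
}

func main() {
	o := ContainersOptions{State: "all", Name: "kube-proxy", Namespaces: []string{}}
	fmt.Printf("%+v\n", o) // {State:all Name:kube-proxy Namespaces:[]}
}
```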
	I1209 05:55:47.376875 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:55:47.386961 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:55:47.387031 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:55:47.413420 1437114 cri.go:89] found id: ""
	I1209 05:55:47.413444 1437114 logs.go:282] 0 containers: []
	W1209 05:55:47.413452 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:55:47.413458 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:55:47.413519 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:55:47.441969 1437114 cri.go:89] found id: ""
	I1209 05:55:47.442001 1437114 logs.go:282] 0 containers: []
	W1209 05:55:47.442010 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:55:47.442016 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:55:47.442081 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:55:47.465166 1437114 cri.go:89] found id: ""
	I1209 05:55:47.465195 1437114 logs.go:282] 0 containers: []
	W1209 05:55:47.465210 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:55:47.465216 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:55:47.465283 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:55:47.493704 1437114 cri.go:89] found id: ""
	I1209 05:55:47.493730 1437114 logs.go:282] 0 containers: []
	W1209 05:55:47.493739 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:55:47.493745 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:55:47.493821 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:55:47.519554 1437114 cri.go:89] found id: ""
	I1209 05:55:47.519589 1437114 logs.go:282] 0 containers: []
	W1209 05:55:47.519598 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:55:47.519604 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:55:47.519671 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:55:47.549334 1437114 cri.go:89] found id: ""
	I1209 05:55:47.549367 1437114 logs.go:282] 0 containers: []
	W1209 05:55:47.549376 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:55:47.549383 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:55:47.549456 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:55:47.578946 1437114 cri.go:89] found id: ""
	I1209 05:55:47.578980 1437114 logs.go:282] 0 containers: []
	W1209 05:55:47.578989 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:55:47.578995 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:55:47.579062 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:55:47.603683 1437114 cri.go:89] found id: ""
	I1209 05:55:47.603716 1437114 logs.go:282] 0 containers: []
	W1209 05:55:47.603725 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:55:47.603734 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:55:47.603745 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:55:47.619447 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:55:47.619482 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:55:47.687529 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:55:47.675579   10710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:47.676174   10710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:47.679656   10710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:47.680257   10710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:47.681964   10710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:55:47.687594 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:55:47.687641 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:55:47.715721 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:55:47.715792 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:55:47.745866 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:55:47.745889 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:55:50.305015 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:55:50.315642 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:55:50.315787 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:55:50.341274 1437114 cri.go:89] found id: ""
	I1209 05:55:50.341298 1437114 logs.go:282] 0 containers: []
	W1209 05:55:50.341306 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:55:50.341314 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:55:50.341370 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:55:50.366500 1437114 cri.go:89] found id: ""
	I1209 05:55:50.366533 1437114 logs.go:282] 0 containers: []
	W1209 05:55:50.366542 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:55:50.366548 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:55:50.366613 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:55:50.390751 1437114 cri.go:89] found id: ""
	I1209 05:55:50.390787 1437114 logs.go:282] 0 containers: []
	W1209 05:55:50.390796 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:55:50.390802 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:55:50.390867 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:55:50.418576 1437114 cri.go:89] found id: ""
	I1209 05:55:50.418601 1437114 logs.go:282] 0 containers: []
	W1209 05:55:50.418610 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:55:50.418616 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:55:50.418683 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:55:50.447207 1437114 cri.go:89] found id: ""
	I1209 05:55:50.447250 1437114 logs.go:282] 0 containers: []
	W1209 05:55:50.447261 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:55:50.447267 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:55:50.447339 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:55:50.476321 1437114 cri.go:89] found id: ""
	I1209 05:55:50.476346 1437114 logs.go:282] 0 containers: []
	W1209 05:55:50.476354 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:55:50.476372 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:55:50.476430 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:55:50.501331 1437114 cri.go:89] found id: ""
	I1209 05:55:50.501356 1437114 logs.go:282] 0 containers: []
	W1209 05:55:50.501365 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:55:50.501371 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:55:50.501439 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:55:50.525182 1437114 cri.go:89] found id: ""
	I1209 05:55:50.525207 1437114 logs.go:282] 0 containers: []
	W1209 05:55:50.525215 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:55:50.525224 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:55:50.525262 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:55:50.584512 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:55:50.584550 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:55:50.600341 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:55:50.600369 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:55:50.667248 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:55:50.658895   10823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:50.659509   10823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:50.661016   10823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:50.661529   10823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:50.663114   10823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:55:50.667314 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:55:50.667346 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:55:50.695874 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:55:50.695911 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:55:53.232139 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:55:53.242299 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:55:53.242369 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:55:53.265738 1437114 cri.go:89] found id: ""
	I1209 05:55:53.265763 1437114 logs.go:282] 0 containers: []
	W1209 05:55:53.265771 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:55:53.265777 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:55:53.265834 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:55:53.289547 1437114 cri.go:89] found id: ""
	I1209 05:55:53.289571 1437114 logs.go:282] 0 containers: []
	W1209 05:55:53.289580 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:55:53.289586 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:55:53.289644 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:55:53.314432 1437114 cri.go:89] found id: ""
	I1209 05:55:53.314457 1437114 logs.go:282] 0 containers: []
	W1209 05:55:53.314466 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:55:53.314472 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:55:53.314529 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:55:53.338078 1437114 cri.go:89] found id: ""
	I1209 05:55:53.338100 1437114 logs.go:282] 0 containers: []
	W1209 05:55:53.338109 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:55:53.338115 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:55:53.338190 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:55:53.362597 1437114 cri.go:89] found id: ""
	I1209 05:55:53.362623 1437114 logs.go:282] 0 containers: []
	W1209 05:55:53.362632 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:55:53.362638 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:55:53.362700 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:55:53.387075 1437114 cri.go:89] found id: ""
	I1209 05:55:53.387100 1437114 logs.go:282] 0 containers: []
	W1209 05:55:53.387108 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:55:53.387115 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:55:53.387181 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:55:53.410813 1437114 cri.go:89] found id: ""
	I1209 05:55:53.410836 1437114 logs.go:282] 0 containers: []
	W1209 05:55:53.410845 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:55:53.410850 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:55:53.410910 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:55:53.439085 1437114 cri.go:89] found id: ""
	I1209 05:55:53.439107 1437114 logs.go:282] 0 containers: []
	W1209 05:55:53.439115 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:55:53.439124 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:55:53.439135 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:55:53.496416 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:55:53.496450 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:55:53.512950 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:55:53.512979 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:55:53.592134 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:55:53.583228   10933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:53.583903   10933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:53.585634   10933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:53.586183   10933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:53.587806   10933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:55:53.592155 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:55:53.592168 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:55:53.620855 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:55:53.620901 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:55:56.151858 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:55:56.162360 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:55:56.162444 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:55:56.192447 1437114 cri.go:89] found id: ""
	I1209 05:55:56.192474 1437114 logs.go:282] 0 containers: []
	W1209 05:55:56.192482 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:55:56.192488 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:55:56.192545 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:55:56.230900 1437114 cri.go:89] found id: ""
	I1209 05:55:56.230927 1437114 logs.go:282] 0 containers: []
	W1209 05:55:56.230935 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:55:56.230941 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:55:56.231005 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:55:56.264649 1437114 cri.go:89] found id: ""
	I1209 05:55:56.264673 1437114 logs.go:282] 0 containers: []
	W1209 05:55:56.264683 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:55:56.264689 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:55:56.264748 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:55:56.287754 1437114 cri.go:89] found id: ""
	I1209 05:55:56.287780 1437114 logs.go:282] 0 containers: []
	W1209 05:55:56.287788 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:55:56.287794 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:55:56.287851 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:55:56.311939 1437114 cri.go:89] found id: ""
	I1209 05:55:56.311966 1437114 logs.go:282] 0 containers: []
	W1209 05:55:56.311974 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:55:56.311981 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:55:56.312071 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:55:56.336812 1437114 cri.go:89] found id: ""
	I1209 05:55:56.336847 1437114 logs.go:282] 0 containers: []
	W1209 05:55:56.336856 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:55:56.336862 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:55:56.336926 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:55:56.364355 1437114 cri.go:89] found id: ""
	I1209 05:55:56.364378 1437114 logs.go:282] 0 containers: []
	W1209 05:55:56.364387 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:55:56.364394 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:55:56.364451 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:55:56.388220 1437114 cri.go:89] found id: ""
	I1209 05:55:56.388242 1437114 logs.go:282] 0 containers: []
	W1209 05:55:56.388251 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:55:56.388260 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:55:56.388272 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:55:56.451922 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:55:56.443739   11039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:56.444234   11039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:56.445911   11039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:56.446440   11039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:56.448091   11039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:55:56.451941 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:55:56.451955 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:55:56.477213 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:55:56.477256 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:55:56.504874 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:55:56.504908 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:55:56.561753 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:55:56.561793 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
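	Each polling cycle above checks every control-plane component with crictl and treats empty output as "no container found". A rough Go equivalent of that check follows; it is an illustrative sketch assuming crictl is on PATH and sudo is password-less, as on the test host, not minikube's own implementation:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// findContainer mirrors `sudo crictl ps -a --quiet --name=<name>` from the
	// log: --quiet prints one container ID per line, so no output means no match.
	func findContainer(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
			ids, err := findContainer(name)
			if err != nil {
				fmt.Println(name, "check failed:", err)
				continue
			}
			if len(ids) == 0 {
				fmt.Printf("no container was found matching %q\n", name)
				continue
			}
			fmt.Println(name, "->", ids)
		}
	}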
	I1209 05:55:59.078916 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:55:59.089470 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:55:59.089545 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:55:59.113298 1437114 cri.go:89] found id: ""
	I1209 05:55:59.113324 1437114 logs.go:282] 0 containers: []
	W1209 05:55:59.113332 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:55:59.113339 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:55:59.113402 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:55:59.141250 1437114 cri.go:89] found id: ""
	I1209 05:55:59.141278 1437114 logs.go:282] 0 containers: []
	W1209 05:55:59.141286 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:55:59.141292 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:55:59.141351 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:55:59.170020 1437114 cri.go:89] found id: ""
	I1209 05:55:59.170044 1437114 logs.go:282] 0 containers: []
	W1209 05:55:59.170052 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:55:59.170059 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:55:59.170122 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:55:59.210757 1437114 cri.go:89] found id: ""
	I1209 05:55:59.210792 1437114 logs.go:282] 0 containers: []
	W1209 05:55:59.210801 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:55:59.210808 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:55:59.210873 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:55:59.237433 1437114 cri.go:89] found id: ""
	I1209 05:55:59.237470 1437114 logs.go:282] 0 containers: []
	W1209 05:55:59.237479 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:55:59.237486 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:55:59.237551 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:55:59.263923 1437114 cri.go:89] found id: ""
	I1209 05:55:59.263959 1437114 logs.go:282] 0 containers: []
	W1209 05:55:59.263968 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:55:59.263975 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:55:59.264071 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:55:59.288850 1437114 cri.go:89] found id: ""
	I1209 05:55:59.288916 1437114 logs.go:282] 0 containers: []
	W1209 05:55:59.288940 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:55:59.288954 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:55:59.289029 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:55:59.316320 1437114 cri.go:89] found id: ""
	I1209 05:55:59.316347 1437114 logs.go:282] 0 containers: []
	W1209 05:55:59.316356 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:55:59.316365 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:55:59.316376 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:55:59.383644 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:55:59.373968   11152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:59.374816   11152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:59.376482   11152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:59.376830   11152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:55:59.378974   11152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:55:59.383666 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:55:59.383680 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:55:59.409556 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:55:59.409591 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:55:59.440707 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:55:59.440737 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:55:59.496851 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:55:59.496887 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
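	The "Gathering logs for ..." steps above shell out to journalctl per systemd unit and keep the last 400 lines. A small Go sketch of that step, with unit names and the line limit copied from the log commands (again an illustration, not the test harness itself):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// gather mirrors `sudo journalctl -u <unit> -n 400` from the log.
	func gather(unit string) (string, error) {
		out, err := exec.Command("sudo", "journalctl", "-u", unit, "-n", "400").CombinedOutput()
		return string(out), err
	}

	func main() {
		for _, unit := range []string{"containerd", "kubelet"} {
			logs, err := gather(unit)
			if err != nil {
				fmt.Println("gathering", unit, "failed:", err)
				continue
			}
			fmt.Printf("=== %s (%d bytes) ===\n", unit, len(logs))
		}
	}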
	I1209 05:56:02.013397 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:56:02.023815 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:56:02.023883 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:56:02.054212 1437114 cri.go:89] found id: ""
	I1209 05:56:02.054240 1437114 logs.go:282] 0 containers: []
	W1209 05:56:02.054249 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:56:02.054255 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:56:02.054323 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:56:02.079245 1437114 cri.go:89] found id: ""
	I1209 05:56:02.079274 1437114 logs.go:282] 0 containers: []
	W1209 05:56:02.079283 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:56:02.079289 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:56:02.079347 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:56:02.104356 1437114 cri.go:89] found id: ""
	I1209 05:56:02.104399 1437114 logs.go:282] 0 containers: []
	W1209 05:56:02.104408 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:56:02.104415 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:56:02.104478 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:56:02.129688 1437114 cri.go:89] found id: ""
	I1209 05:56:02.129753 1437114 logs.go:282] 0 containers: []
	W1209 05:56:02.129777 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:56:02.129795 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:56:02.129886 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:56:02.159435 1437114 cri.go:89] found id: ""
	I1209 05:56:02.159463 1437114 logs.go:282] 0 containers: []
	W1209 05:56:02.159471 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:56:02.159478 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:56:02.159537 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:56:02.193945 1437114 cri.go:89] found id: ""
	I1209 05:56:02.193969 1437114 logs.go:282] 0 containers: []
	W1209 05:56:02.193987 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:56:02.193994 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:56:02.194093 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:56:02.230499 1437114 cri.go:89] found id: ""
	I1209 05:56:02.230528 1437114 logs.go:282] 0 containers: []
	W1209 05:56:02.230542 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:56:02.230549 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:56:02.230650 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:56:02.261955 1437114 cri.go:89] found id: ""
	I1209 05:56:02.262021 1437114 logs.go:282] 0 containers: []
	W1209 05:56:02.262046 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:56:02.262063 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:56:02.262075 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:56:02.278208 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:56:02.278245 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:56:02.342511 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:56:02.334138   11267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:02.334823   11267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:02.336452   11267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:02.337017   11267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:02.338543   11267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:56:02.342581 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:56:02.342603 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:56:02.367883 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:56:02.367920 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:56:02.398560 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:56:02.398587 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:56:04.956142 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:56:04.966664 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:56:04.966728 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:56:05.000769 1437114 cri.go:89] found id: ""
	I1209 05:56:05.000792 1437114 logs.go:282] 0 containers: []
	W1209 05:56:05.000801 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:56:05.000807 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:56:05.000868 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:56:05.030686 1437114 cri.go:89] found id: ""
	I1209 05:56:05.030713 1437114 logs.go:282] 0 containers: []
	W1209 05:56:05.030726 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:56:05.030733 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:56:05.030792 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:56:05.055515 1437114 cri.go:89] found id: ""
	I1209 05:56:05.055541 1437114 logs.go:282] 0 containers: []
	W1209 05:56:05.055550 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:56:05.055556 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:56:05.055614 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:56:05.080557 1437114 cri.go:89] found id: ""
	I1209 05:56:05.080584 1437114 logs.go:282] 0 containers: []
	W1209 05:56:05.080593 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:56:05.080599 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:56:05.080659 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:56:05.106686 1437114 cri.go:89] found id: ""
	I1209 05:56:05.106714 1437114 logs.go:282] 0 containers: []
	W1209 05:56:05.106724 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:56:05.106731 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:56:05.106792 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:56:05.131985 1437114 cri.go:89] found id: ""
	I1209 05:56:05.132044 1437114 logs.go:282] 0 containers: []
	W1209 05:56:05.132053 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:56:05.132060 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:56:05.132127 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:56:05.158936 1437114 cri.go:89] found id: ""
	I1209 05:56:05.159002 1437114 logs.go:282] 0 containers: []
	W1209 05:56:05.159027 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:56:05.159045 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:56:05.159134 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:56:05.186586 1437114 cri.go:89] found id: ""
	I1209 05:56:05.186658 1437114 logs.go:282] 0 containers: []
	W1209 05:56:05.186682 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:56:05.186704 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:56:05.186745 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:56:05.252531 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:56:05.252568 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:56:05.268794 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:56:05.268823 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:56:05.330847 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:56:05.322209   11379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:05.322901   11379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:05.324496   11379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:05.325041   11379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:05.326643   11379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:56:05.330870 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:56:05.330882 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:56:05.356845 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:56:05.356877 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:56:07.894100 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:56:07.904726 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:56:07.904808 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:56:07.934685 1437114 cri.go:89] found id: ""
	I1209 05:56:07.934707 1437114 logs.go:282] 0 containers: []
	W1209 05:56:07.934715 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:56:07.934727 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:56:07.934786 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:56:07.966688 1437114 cri.go:89] found id: ""
	I1209 05:56:07.966715 1437114 logs.go:282] 0 containers: []
	W1209 05:56:07.966724 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:56:07.966730 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:56:07.966791 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:56:07.997688 1437114 cri.go:89] found id: ""
	I1209 05:56:07.997718 1437114 logs.go:282] 0 containers: []
	W1209 05:56:07.997727 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:56:07.997733 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:56:07.997794 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:56:08.028703 1437114 cri.go:89] found id: ""
	I1209 05:56:08.028738 1437114 logs.go:282] 0 containers: []
	W1209 05:56:08.028748 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:56:08.028756 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:56:08.028836 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:56:08.055186 1437114 cri.go:89] found id: ""
	I1209 05:56:08.055216 1437114 logs.go:282] 0 containers: []
	W1209 05:56:08.055225 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:56:08.055232 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:56:08.055298 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:56:08.081977 1437114 cri.go:89] found id: ""
	I1209 05:56:08.082005 1437114 logs.go:282] 0 containers: []
	W1209 05:56:08.082014 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:56:08.082020 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:56:08.082094 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:56:08.106694 1437114 cri.go:89] found id: ""
	I1209 05:56:08.106719 1437114 logs.go:282] 0 containers: []
	W1209 05:56:08.106728 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:56:08.106735 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:56:08.106794 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:56:08.131242 1437114 cri.go:89] found id: ""
	I1209 05:56:08.131266 1437114 logs.go:282] 0 containers: []
	W1209 05:56:08.131274 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:56:08.131284 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:56:08.131296 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:56:08.200236 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:56:08.191954   11490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:08.192809   11490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:08.194381   11490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:08.194676   11490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:08.196205   11490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:56:08.200261 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:56:08.200275 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:56:08.228642 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:56:08.228684 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:56:08.262181 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:56:08.262210 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:56:08.316796 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:56:08.316828 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:56:10.832826 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:56:10.843625 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:56:10.843696 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:56:10.867741 1437114 cri.go:89] found id: ""
	I1209 05:56:10.867808 1437114 logs.go:282] 0 containers: []
	W1209 05:56:10.867832 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:56:10.867854 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:56:10.867940 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:56:10.893251 1437114 cri.go:89] found id: ""
	I1209 05:56:10.893284 1437114 logs.go:282] 0 containers: []
	W1209 05:56:10.893292 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:56:10.893298 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:56:10.893357 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:56:10.921874 1437114 cri.go:89] found id: ""
	I1209 05:56:10.921897 1437114 logs.go:282] 0 containers: []
	W1209 05:56:10.921906 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:56:10.921912 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:56:10.921977 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:56:10.948235 1437114 cri.go:89] found id: ""
	I1209 05:56:10.948257 1437114 logs.go:282] 0 containers: []
	W1209 05:56:10.948272 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:56:10.948279 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:56:10.948337 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:56:10.977204 1437114 cri.go:89] found id: ""
	I1209 05:56:10.977226 1437114 logs.go:282] 0 containers: []
	W1209 05:56:10.977234 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:56:10.977239 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:56:10.977298 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:56:11.011653 1437114 cri.go:89] found id: ""
	I1209 05:56:11.011677 1437114 logs.go:282] 0 containers: []
	W1209 05:56:11.011685 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:56:11.011692 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:56:11.011753 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:56:11.038552 1437114 cri.go:89] found id: ""
	I1209 05:56:11.038575 1437114 logs.go:282] 0 containers: []
	W1209 05:56:11.038584 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:56:11.038589 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:56:11.038648 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:56:11.068058 1437114 cri.go:89] found id: ""
	I1209 05:56:11.068081 1437114 logs.go:282] 0 containers: []
	W1209 05:56:11.068089 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:56:11.068098 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:56:11.068109 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:56:11.124172 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:56:11.124208 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:56:11.140275 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:56:11.140316 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:56:11.220317 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:56:11.212396   11608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:11.213001   11608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:11.214543   11608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:11.215016   11608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:11.216494   11608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:56:11.220349 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:56:11.220362 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:56:11.245629 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:56:11.245662 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:56:13.776003 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:56:13.786369 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:56:13.786448 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:56:13.809520 1437114 cri.go:89] found id: ""
	I1209 05:56:13.809544 1437114 logs.go:282] 0 containers: []
	W1209 05:56:13.809553 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:56:13.809559 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:56:13.809618 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:56:13.833347 1437114 cri.go:89] found id: ""
	I1209 05:56:13.833370 1437114 logs.go:282] 0 containers: []
	W1209 05:56:13.833378 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:56:13.833384 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:56:13.833446 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:56:13.857799 1437114 cri.go:89] found id: ""
	I1209 05:56:13.857830 1437114 logs.go:282] 0 containers: []
	W1209 05:56:13.857840 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:56:13.857846 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:56:13.857906 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:56:13.882625 1437114 cri.go:89] found id: ""
	I1209 05:56:13.882658 1437114 logs.go:282] 0 containers: []
	W1209 05:56:13.882667 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:56:13.882673 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:56:13.882742 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:56:13.910846 1437114 cri.go:89] found id: ""
	I1209 05:56:13.910880 1437114 logs.go:282] 0 containers: []
	W1209 05:56:13.910889 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:56:13.910895 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:56:13.910962 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:56:13.942418 1437114 cri.go:89] found id: ""
	I1209 05:56:13.942483 1437114 logs.go:282] 0 containers: []
	W1209 05:56:13.942510 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:56:13.942528 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:56:13.942615 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:56:13.972617 1437114 cri.go:89] found id: ""
	I1209 05:56:13.972686 1437114 logs.go:282] 0 containers: []
	W1209 05:56:13.972710 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:56:13.972728 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:56:13.972814 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:56:14.010643 1437114 cri.go:89] found id: ""
	I1209 05:56:14.010672 1437114 logs.go:282] 0 containers: []
	W1209 05:56:14.010690 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:56:14.010712 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:56:14.010743 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:56:14.045403 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:56:14.045489 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:56:14.103757 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:56:14.103793 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:56:14.119622 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:56:14.119648 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:56:14.199726 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:56:14.184447   11733 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:14.191243   11733 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:14.191781   11733 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:14.193355   11733 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:14.193912   11733 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1209 05:56:14.199794 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:56:14.199821 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
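	The timestamps show the wait loop re-running `sudo pgrep -xnf kube-apiserver.*minikube.*` roughly every three seconds until the apiserver process appears or the start times out. A minimal sketch of such a loop, where the interval and overall deadline are inferred from the log rather than taken from minikube's source:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		deadline := time.Now().Add(2 * time.Minute) // assumed budget for the sketch
		for time.Now().Before(deadline) {
			// pgrep exits non-zero when no process matches, so a nil error
			// means the apiserver process finally appeared.
			err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
			if err == nil {
				fmt.Println("kube-apiserver process is up")
				return
			}
			time.Sleep(3 * time.Second)
		}
		fmt.Println("timed out waiting for kube-apiserver")
	}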
	I1209 05:56:16.729940 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:56:16.740423 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:56:16.740497 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:56:16.765729 1437114 cri.go:89] found id: ""
	I1209 05:56:16.765755 1437114 logs.go:282] 0 containers: []
	W1209 05:56:16.765763 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:56:16.765770 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:56:16.765831 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:56:16.793724 1437114 cri.go:89] found id: ""
	I1209 05:56:16.793750 1437114 logs.go:282] 0 containers: []
	W1209 05:56:16.793759 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:56:16.793765 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:56:16.793824 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:56:16.821402 1437114 cri.go:89] found id: ""
	I1209 05:56:16.821429 1437114 logs.go:282] 0 containers: []
	W1209 05:56:16.821437 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:56:16.821444 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:56:16.821504 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:56:16.846074 1437114 cri.go:89] found id: ""
	I1209 05:56:16.846101 1437114 logs.go:282] 0 containers: []
	W1209 05:56:16.846110 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:56:16.846116 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:56:16.846175 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:56:16.870665 1437114 cri.go:89] found id: ""
	I1209 05:56:16.870689 1437114 logs.go:282] 0 containers: []
	W1209 05:56:16.870698 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:56:16.870705 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:56:16.870785 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:56:16.894509 1437114 cri.go:89] found id: ""
	I1209 05:56:16.894542 1437114 logs.go:282] 0 containers: []
	W1209 05:56:16.894550 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:56:16.894557 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:56:16.894651 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:56:16.921935 1437114 cri.go:89] found id: ""
	I1209 05:56:16.921962 1437114 logs.go:282] 0 containers: []
	W1209 05:56:16.921971 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:56:16.921977 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:56:16.922049 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:56:16.950536 1437114 cri.go:89] found id: ""
	I1209 05:56:16.950570 1437114 logs.go:282] 0 containers: []
	W1209 05:56:16.950579 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:56:16.950588 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:56:16.950599 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:56:17.008406 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:56:17.008442 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:56:17.024072 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:56:17.024098 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:56:17.089436 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:56:17.080816   11835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:17.081612   11835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:17.083298   11835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:17.083818   11835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:17.085479   11835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:56:17.080816   11835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:17.081612   11835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:17.083298   11835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:17.083818   11835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:17.085479   11835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:56:17.089456 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:56:17.089468 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:56:17.114751 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:56:17.114785 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
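
The cycle above is minikube's control-plane probe: for each expected component it lists CRI containers by name (cri.go), and when every listing comes back empty it falls back to gathering kubelet, dmesg, containerd, and container-status logs. A minimal sketch of the same probe, reproducing the logged `sudo crictl ps -a --quiet --name=...` invocations by hand; the program and its output format are illustrative, not minikube's code, and it assumes crictl is installed on the node:

    // probe.go - minimal sketch of the container probe seen in the log above.
    // For each expected control-plane component, list CRI containers by name
    // and report when none exist (cf. logs.go:284 "No container was found").
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// Component names taken directly from the log lines above.
    	components := []string{
    		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "kindnet",
    		"kubernetes-dashboard",
    	}
    	for _, name := range components {
    		// Same flags as the logged command: all states, IDs only, name filter.
    		out, err := exec.Command("sudo", "crictl", "ps", "-a",
    			"--quiet", "--name="+name).Output()
    		if err != nil {
    			fmt.Printf("crictl failed for %q: %v\n", name, err)
    			continue
    		}
    		ids := strings.Fields(string(out))
    		if len(ids) == 0 {
    			fmt.Printf("no container found matching %q\n", name)
    		} else {
    			fmt.Printf("%q: %d container(s): %v\n", name, len(ids), ids)
    		}
    	}
    }

On this node every component reports zero containers, which is why each pass in the log ends with eight "No container was found matching" warnings before the log-gathering fallback runs.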
	I1209 05:56:19.649189 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:56:19.659355 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:56:19.659709 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:56:19.687359 1437114 cri.go:89] found id: ""
	I1209 05:56:19.687393 1437114 logs.go:282] 0 containers: []
	W1209 05:56:19.687402 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:56:19.687408 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:56:19.687482 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:56:19.711167 1437114 cri.go:89] found id: ""
	I1209 05:56:19.711241 1437114 logs.go:282] 0 containers: []
	W1209 05:56:19.711264 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:56:19.711282 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:56:19.711361 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:56:19.734776 1437114 cri.go:89] found id: ""
	I1209 05:56:19.734843 1437114 logs.go:282] 0 containers: []
	W1209 05:56:19.734868 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:56:19.734886 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:56:19.734978 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:56:19.758075 1437114 cri.go:89] found id: ""
	I1209 05:56:19.758101 1437114 logs.go:282] 0 containers: []
	W1209 05:56:19.758111 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:56:19.758117 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:56:19.758191 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:56:19.781866 1437114 cri.go:89] found id: ""
	I1209 05:56:19.781889 1437114 logs.go:282] 0 containers: []
	W1209 05:56:19.781897 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:56:19.781903 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:56:19.782011 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:56:19.806779 1437114 cri.go:89] found id: ""
	I1209 05:56:19.806811 1437114 logs.go:282] 0 containers: []
	W1209 05:56:19.806820 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:56:19.806827 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:56:19.806896 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:56:19.830892 1437114 cri.go:89] found id: ""
	I1209 05:56:19.830931 1437114 logs.go:282] 0 containers: []
	W1209 05:56:19.830940 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:56:19.830946 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:56:19.831013 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:56:19.855119 1437114 cri.go:89] found id: ""
	I1209 05:56:19.855151 1437114 logs.go:282] 0 containers: []
	W1209 05:56:19.855160 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:56:19.855168 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:56:19.855180 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:56:19.918437 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:56:19.910860   11934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:19.911393   11934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:19.912883   11934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:19.913323   11934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:19.914743   11934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:56:19.910860   11934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:19.911393   11934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:19.912883   11934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:19.913323   11934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:19.914743   11934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:56:19.918456 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:56:19.918468 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:56:19.948986 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:56:19.949022 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:56:19.983513 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:56:19.983543 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:56:20.044570 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:56:20.044611 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:56:22.561138 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:56:22.571631 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:56:22.571701 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:56:22.597481 1437114 cri.go:89] found id: ""
	I1209 05:56:22.597507 1437114 logs.go:282] 0 containers: []
	W1209 05:56:22.597516 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:56:22.597522 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:56:22.597583 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:56:22.620910 1437114 cri.go:89] found id: ""
	I1209 05:56:22.620934 1437114 logs.go:282] 0 containers: []
	W1209 05:56:22.620942 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:56:22.620948 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:56:22.621010 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:56:22.645762 1437114 cri.go:89] found id: ""
	I1209 05:56:22.645786 1437114 logs.go:282] 0 containers: []
	W1209 05:56:22.645794 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:56:22.645802 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:56:22.645860 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:56:22.674030 1437114 cri.go:89] found id: ""
	I1209 05:56:22.674055 1437114 logs.go:282] 0 containers: []
	W1209 05:56:22.674063 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:56:22.674069 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:56:22.674129 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:56:22.697420 1437114 cri.go:89] found id: ""
	I1209 05:56:22.697483 1437114 logs.go:282] 0 containers: []
	W1209 05:56:22.697498 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:56:22.697505 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:56:22.697572 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:56:22.721275 1437114 cri.go:89] found id: ""
	I1209 05:56:22.721303 1437114 logs.go:282] 0 containers: []
	W1209 05:56:22.721311 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:56:22.721318 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:56:22.721375 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:56:22.751174 1437114 cri.go:89] found id: ""
	I1209 05:56:22.751207 1437114 logs.go:282] 0 containers: []
	W1209 05:56:22.751216 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:56:22.751223 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:56:22.751297 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:56:22.783334 1437114 cri.go:89] found id: ""
	I1209 05:56:22.783359 1437114 logs.go:282] 0 containers: []
	W1209 05:56:22.783368 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:56:22.783377 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:56:22.783388 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:56:22.798903 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:56:22.798931 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:56:22.863930 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:56:22.855168   12050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:22.855903   12050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:22.857473   12050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:22.858541   12050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:22.859308   12050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:56:22.855168   12050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:22.855903   12050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:22.857473   12050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:22.858541   12050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:22.859308   12050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:56:22.863951 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:56:22.863964 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:56:22.889010 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:56:22.889044 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:56:22.917472 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:56:22.917497 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:56:25.477751 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:56:25.488155 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:56:25.488227 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:56:25.513691 1437114 cri.go:89] found id: ""
	I1209 05:56:25.513726 1437114 logs.go:282] 0 containers: []
	W1209 05:56:25.513735 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:56:25.513742 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:56:25.513815 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:56:25.538394 1437114 cri.go:89] found id: ""
	I1209 05:56:25.538426 1437114 logs.go:282] 0 containers: []
	W1209 05:56:25.538434 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:56:25.538441 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:56:25.538507 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:56:25.565992 1437114 cri.go:89] found id: ""
	I1209 05:56:25.566014 1437114 logs.go:282] 0 containers: []
	W1209 05:56:25.566023 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:56:25.566028 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:56:25.566084 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:56:25.594238 1437114 cri.go:89] found id: ""
	I1209 05:56:25.594273 1437114 logs.go:282] 0 containers: []
	W1209 05:56:25.594283 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:56:25.594289 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:56:25.594357 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:56:25.618528 1437114 cri.go:89] found id: ""
	I1209 05:56:25.618554 1437114 logs.go:282] 0 containers: []
	W1209 05:56:25.618562 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:56:25.618569 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:56:25.618630 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:56:25.645761 1437114 cri.go:89] found id: ""
	I1209 05:56:25.645793 1437114 logs.go:282] 0 containers: []
	W1209 05:56:25.645802 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:56:25.645809 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:56:25.645868 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:56:25.673275 1437114 cri.go:89] found id: ""
	I1209 05:56:25.673303 1437114 logs.go:282] 0 containers: []
	W1209 05:56:25.673313 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:56:25.673320 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:56:25.673378 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:56:25.698776 1437114 cri.go:89] found id: ""
	I1209 05:56:25.698801 1437114 logs.go:282] 0 containers: []
	W1209 05:56:25.698810 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:56:25.698819 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:56:25.698831 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:56:25.758726 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:56:25.758763 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:56:25.774459 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:56:25.774498 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:56:25.837634 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:56:25.829791   12165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:25.830357   12165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:25.831894   12165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:25.832310   12165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:25.833747   12165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:56:25.829791   12165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:25.830357   12165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:25.831894   12165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:25.832310   12165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:25.833747   12165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:56:25.837654 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:56:25.837666 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:56:25.863059 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:56:25.863089 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
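
The "container status" command that closes each cycle uses a tool-fallback idiom: prefer crictl if it is on PATH, otherwise fall back to `docker ps -a`. A minimal sketch of the same idiom in Go (the function name and error message are illustrative, not minikube's API):

    // fallback.go - sketch of the fallback in the logged shell command:
    //   sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func containerStatus() ([]byte, error) {
    	for _, tool := range []string{"crictl", "docker"} {
    		if _, err := exec.LookPath(tool); err != nil {
    			continue // tool not installed; try the next one
    		}
    		out, err := exec.Command("sudo", tool, "ps", "-a").CombinedOutput()
    		if err == nil {
    			return out, nil
    		}
    	}
    	return nil, fmt.Errorf("neither crictl nor docker produced a listing")
    }

    func main() {
    	out, err := containerStatus()
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Print(string(out))
    }

This is why the report can be generated on both containerd and Docker runtimes with one command line.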
	I1209 05:56:28.390209 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:56:28.400783 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:56:28.400858 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:56:28.431158 1437114 cri.go:89] found id: ""
	I1209 05:56:28.431186 1437114 logs.go:282] 0 containers: []
	W1209 05:56:28.431195 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:56:28.431201 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:56:28.431257 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:56:28.466252 1437114 cri.go:89] found id: ""
	I1209 05:56:28.466304 1437114 logs.go:282] 0 containers: []
	W1209 05:56:28.466313 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:56:28.466319 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:56:28.466387 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:56:28.495101 1437114 cri.go:89] found id: ""
	I1209 05:56:28.495128 1437114 logs.go:282] 0 containers: []
	W1209 05:56:28.495135 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:56:28.495141 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:56:28.495205 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:56:28.519814 1437114 cri.go:89] found id: ""
	I1209 05:56:28.519840 1437114 logs.go:282] 0 containers: []
	W1209 05:56:28.519848 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:56:28.519854 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:56:28.519917 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:56:28.545987 1437114 cri.go:89] found id: ""
	I1209 05:56:28.546014 1437114 logs.go:282] 0 containers: []
	W1209 05:56:28.546022 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:56:28.546029 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:56:28.546087 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:56:28.569653 1437114 cri.go:89] found id: ""
	I1209 05:56:28.569677 1437114 logs.go:282] 0 containers: []
	W1209 05:56:28.569686 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:56:28.569693 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:56:28.569750 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:56:28.592506 1437114 cri.go:89] found id: ""
	I1209 05:56:28.592531 1437114 logs.go:282] 0 containers: []
	W1209 05:56:28.592540 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:56:28.592546 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:56:28.592603 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:56:28.616082 1437114 cri.go:89] found id: ""
	I1209 05:56:28.616109 1437114 logs.go:282] 0 containers: []
	W1209 05:56:28.616118 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:56:28.616127 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:56:28.616140 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:56:28.641671 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:56:28.641702 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:56:28.667950 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:56:28.667976 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:56:28.723545 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:56:28.723579 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:56:28.739105 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:56:28.739133 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:56:28.799453 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:56:28.791383   12291 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:28.792152   12291 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:28.793337   12291 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:28.793895   12291 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:28.795399   12291 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:56:28.791383   12291 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:28.792152   12291 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:28.793337   12291 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:28.793895   12291 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:28.795399   12291 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
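
Every "describe nodes" attempt in this window fails identically: with no kube-apiserver container running, nothing listens on localhost:8443, so kubectl's TCP connect is refused outright. The probe then retries on a roughly three-second cadence (05:56:16, :19, :22, :25, :28, :31, ...) until the start timeout expires. A minimal sketch of a poll that distinguishes "connection refused" (no process listening at all, the case in this log) from "unhealthy" (process up but not ready); it assumes the apiserver's standard /healthz endpoint on the logged port, and skipping TLS verification is for this liveness-style check only:

    // healthpoll.go - sketch of an apiserver liveness poll (not minikube's code).
    package main

    import (
    	"crypto/tls"
    	"errors"
    	"fmt"
    	"net/http"
    	"syscall"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 2 * time.Second,
    		Transport: &http.Transport{
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	for i := 0; i < 5; i++ {
    		resp, err := client.Get("https://localhost:8443/healthz")
    		switch {
    		case err != nil && errors.Is(err, syscall.ECONNREFUSED):
    			// The failure mode in the log: nothing is bound to the port.
    			fmt.Println("connection refused: no apiserver is listening")
    		case err != nil:
    			fmt.Printf("request failed: %v\n", err)
    		default:
    			fmt.Printf("apiserver answered: %s\n", resp.Status)
    			resp.Body.Close()
    			return
    		}
    		time.Sleep(3 * time.Second) // roughly the retry cadence in the log
    	}
    }

Because the connect is refused rather than timing out, each kubectl attempt fails within milliseconds, and the same five memcache.go:265 errors repeat verbatim in every cycle below.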
	I1209 05:56:31.300174 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:56:31.310601 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:56:31.310671 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:56:31.335264 1437114 cri.go:89] found id: ""
	I1209 05:56:31.335286 1437114 logs.go:282] 0 containers: []
	W1209 05:56:31.335295 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:56:31.335301 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:56:31.335359 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:56:31.359354 1437114 cri.go:89] found id: ""
	I1209 05:56:31.359377 1437114 logs.go:282] 0 containers: []
	W1209 05:56:31.359386 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:56:31.359392 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:56:31.359451 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:56:31.385360 1437114 cri.go:89] found id: ""
	I1209 05:56:31.385383 1437114 logs.go:282] 0 containers: []
	W1209 05:56:31.385392 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:56:31.385398 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:56:31.385463 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:56:31.410224 1437114 cri.go:89] found id: ""
	I1209 05:56:31.410250 1437114 logs.go:282] 0 containers: []
	W1209 05:56:31.410258 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:56:31.410265 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:56:31.410359 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:56:31.451992 1437114 cri.go:89] found id: ""
	I1209 05:56:31.452040 1437114 logs.go:282] 0 containers: []
	W1209 05:56:31.452049 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:56:31.452056 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:56:31.452116 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:56:31.484950 1437114 cri.go:89] found id: ""
	I1209 05:56:31.484979 1437114 logs.go:282] 0 containers: []
	W1209 05:56:31.484987 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:56:31.484994 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:56:31.485052 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:56:31.518900 1437114 cri.go:89] found id: ""
	I1209 05:56:31.518929 1437114 logs.go:282] 0 containers: []
	W1209 05:56:31.518938 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:56:31.518944 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:56:31.519004 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:56:31.542368 1437114 cri.go:89] found id: ""
	I1209 05:56:31.542398 1437114 logs.go:282] 0 containers: []
	W1209 05:56:31.542406 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:56:31.542414 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:56:31.542426 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:56:31.597391 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:56:31.597426 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:56:31.613542 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:56:31.613568 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:56:31.675768 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:56:31.667793   12388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:31.668366   12388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:31.670085   12388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:31.670523   12388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:31.672049   12388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:56:31.667793   12388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:31.668366   12388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:31.670085   12388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:31.670523   12388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:31.672049   12388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:56:31.675790 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:56:31.675801 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:56:31.705823 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:56:31.705860 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:56:34.233697 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:56:34.244491 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:56:34.244562 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:56:34.269357 1437114 cri.go:89] found id: ""
	I1209 05:56:34.269382 1437114 logs.go:282] 0 containers: []
	W1209 05:56:34.269393 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:56:34.269399 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:56:34.269455 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:56:34.298358 1437114 cri.go:89] found id: ""
	I1209 05:56:34.298389 1437114 logs.go:282] 0 containers: []
	W1209 05:56:34.298398 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:56:34.298404 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:56:34.298463 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:56:34.323280 1437114 cri.go:89] found id: ""
	I1209 05:56:34.323301 1437114 logs.go:282] 0 containers: []
	W1209 05:56:34.323309 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:56:34.323315 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:56:34.323372 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:56:34.347068 1437114 cri.go:89] found id: ""
	I1209 05:56:34.347144 1437114 logs.go:282] 0 containers: []
	W1209 05:56:34.347166 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:56:34.347185 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:56:34.347268 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:56:34.370494 1437114 cri.go:89] found id: ""
	I1209 05:56:34.370519 1437114 logs.go:282] 0 containers: []
	W1209 05:56:34.370528 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:56:34.370534 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:56:34.370593 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:56:34.394561 1437114 cri.go:89] found id: ""
	I1209 05:56:34.394586 1437114 logs.go:282] 0 containers: []
	W1209 05:56:34.394594 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:56:34.394601 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:56:34.394665 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:56:34.418680 1437114 cri.go:89] found id: ""
	I1209 05:56:34.418708 1437114 logs.go:282] 0 containers: []
	W1209 05:56:34.418717 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:56:34.418723 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:56:34.418781 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:56:34.456783 1437114 cri.go:89] found id: ""
	I1209 05:56:34.456811 1437114 logs.go:282] 0 containers: []
	W1209 05:56:34.456819 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:56:34.456828 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:56:34.456839 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:56:34.520119 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:56:34.520160 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:56:34.536245 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:56:34.536271 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:56:34.598782 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:56:34.590200   12497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:34.590688   12497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:34.592324   12497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:34.592957   12497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:34.593923   12497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:56:34.590200   12497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:34.590688   12497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:34.592324   12497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:34.592957   12497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:34.593923   12497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:56:34.598802 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:56:34.598813 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:56:34.623426 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:56:34.623456 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:56:37.156294 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:56:37.167303 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:56:37.167376 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:56:37.213639 1437114 cri.go:89] found id: ""
	I1209 05:56:37.213661 1437114 logs.go:282] 0 containers: []
	W1209 05:56:37.213670 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:56:37.213676 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:56:37.213734 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:56:37.251381 1437114 cri.go:89] found id: ""
	I1209 05:56:37.251451 1437114 logs.go:282] 0 containers: []
	W1209 05:56:37.251472 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:56:37.251489 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:56:37.251577 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:56:37.276652 1437114 cri.go:89] found id: ""
	I1209 05:56:37.276683 1437114 logs.go:282] 0 containers: []
	W1209 05:56:37.276718 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:56:37.276730 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:56:37.276807 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:56:37.306291 1437114 cri.go:89] found id: ""
	I1209 05:56:37.306355 1437114 logs.go:282] 0 containers: []
	W1209 05:56:37.306378 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:56:37.306397 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:56:37.306480 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:56:37.330690 1437114 cri.go:89] found id: ""
	I1209 05:56:37.330761 1437114 logs.go:282] 0 containers: []
	W1209 05:56:37.330784 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:56:37.330803 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:56:37.330891 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:56:37.360974 1437114 cri.go:89] found id: ""
	I1209 05:56:37.360996 1437114 logs.go:282] 0 containers: []
	W1209 05:56:37.361005 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:56:37.361011 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:56:37.361067 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:56:37.385070 1437114 cri.go:89] found id: ""
	I1209 05:56:37.385134 1437114 logs.go:282] 0 containers: []
	W1209 05:56:37.385149 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:56:37.385157 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:56:37.385214 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:56:37.408838 1437114 cri.go:89] found id: ""
	I1209 05:56:37.408872 1437114 logs.go:282] 0 containers: []
	W1209 05:56:37.408881 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:56:37.408890 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:56:37.408904 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:56:37.470471 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:56:37.470552 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:56:37.490560 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:56:37.490636 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:56:37.566595 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:56:37.558458   12607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:37.559098   12607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:37.560793   12607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:37.561249   12607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:37.562761   12607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:56:37.558458   12607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:37.559098   12607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:37.560793   12607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:37.561249   12607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:37.562761   12607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:56:37.566616 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:56:37.566629 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:56:37.591926 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:56:37.591966 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:56:40.120818 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:56:40.132357 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:56:40.132434 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:56:40.159057 1437114 cri.go:89] found id: ""
	I1209 05:56:40.159127 1437114 logs.go:282] 0 containers: []
	W1209 05:56:40.159150 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:56:40.159172 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:56:40.159260 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:56:40.194739 1437114 cri.go:89] found id: ""
	I1209 05:56:40.194762 1437114 logs.go:282] 0 containers: []
	W1209 05:56:40.194770 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:56:40.194777 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:56:40.194842 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:56:40.229613 1437114 cri.go:89] found id: ""
	I1209 05:56:40.229642 1437114 logs.go:282] 0 containers: []
	W1209 05:56:40.229651 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:56:40.229657 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:56:40.229720 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:56:40.266599 1437114 cri.go:89] found id: ""
	I1209 05:56:40.266622 1437114 logs.go:282] 0 containers: []
	W1209 05:56:40.266631 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:56:40.266643 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:56:40.266705 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:56:40.293941 1437114 cri.go:89] found id: ""
	I1209 05:56:40.293964 1437114 logs.go:282] 0 containers: []
	W1209 05:56:40.293973 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:56:40.293979 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:56:40.294037 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:56:40.319374 1437114 cri.go:89] found id: ""
	I1209 05:56:40.319407 1437114 logs.go:282] 0 containers: []
	W1209 05:56:40.319416 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:56:40.319423 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:56:40.319497 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:56:40.344221 1437114 cri.go:89] found id: ""
	I1209 05:56:40.344254 1437114 logs.go:282] 0 containers: []
	W1209 05:56:40.344263 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:56:40.344268 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:56:40.344333 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:56:40.369033 1437114 cri.go:89] found id: ""
	I1209 05:56:40.369056 1437114 logs.go:282] 0 containers: []
	W1209 05:56:40.369066 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:56:40.369076 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:56:40.369088 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:56:40.398480 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:56:40.398506 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:56:40.454913 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:56:40.454992 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:56:40.471549 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:56:40.471617 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:56:40.537419 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:56:40.529111   12732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:40.529745   12732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:40.531419   12732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:40.532052   12732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:40.533493   12732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:56:40.529111   12732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:40.529745   12732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:40.531419   12732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:40.532052   12732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:40.533493   12732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:56:40.537440 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:56:40.537452 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:56:43.063560 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:56:43.074056 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:56:43.074128 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:56:43.098443 1437114 cri.go:89] found id: ""
	I1209 05:56:43.098467 1437114 logs.go:282] 0 containers: []
	W1209 05:56:43.098476 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:56:43.098483 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:56:43.098543 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:56:43.123378 1437114 cri.go:89] found id: ""
	I1209 05:56:43.123405 1437114 logs.go:282] 0 containers: []
	W1209 05:56:43.123414 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:56:43.123420 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:56:43.123483 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:56:43.152283 1437114 cri.go:89] found id: ""
	I1209 05:56:43.152313 1437114 logs.go:282] 0 containers: []
	W1209 05:56:43.152322 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:56:43.152329 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:56:43.152389 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:56:43.176720 1437114 cri.go:89] found id: ""
	I1209 05:56:43.176744 1437114 logs.go:282] 0 containers: []
	W1209 05:56:43.176752 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:56:43.176759 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:56:43.176816 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:56:43.202038 1437114 cri.go:89] found id: ""
	I1209 05:56:43.202066 1437114 logs.go:282] 0 containers: []
	W1209 05:56:43.202074 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:56:43.202081 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:56:43.202136 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:56:43.231595 1437114 cri.go:89] found id: ""
	I1209 05:56:43.231620 1437114 logs.go:282] 0 containers: []
	W1209 05:56:43.231629 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:56:43.231636 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:56:43.231693 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:56:43.261330 1437114 cri.go:89] found id: ""
	I1209 05:56:43.261351 1437114 logs.go:282] 0 containers: []
	W1209 05:56:43.261359 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:56:43.261365 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:56:43.261422 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:56:43.290154 1437114 cri.go:89] found id: ""
	I1209 05:56:43.290175 1437114 logs.go:282] 0 containers: []
	W1209 05:56:43.290183 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:56:43.290192 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:56:43.290204 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:56:43.318398 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:56:43.318424 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:56:43.377076 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:56:43.377112 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:56:43.392846 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:56:43.392877 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:56:43.468351 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:56:43.457690   12848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:43.458459   12848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:43.460248   12848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:43.460927   12848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:43.462463   12848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:56:43.457690   12848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:43.458459   12848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:43.460248   12848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:43.460927   12848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:43.462463   12848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:56:43.468373 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:56:43.468384 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:56:46.000301 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:56:46.013622 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:56:46.013695 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:56:46.043040 1437114 cri.go:89] found id: ""
	I1209 05:56:46.043066 1437114 logs.go:282] 0 containers: []
	W1209 05:56:46.043074 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:56:46.043081 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:56:46.043164 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:56:46.073486 1437114 cri.go:89] found id: ""
	I1209 05:56:46.073512 1437114 logs.go:282] 0 containers: []
	W1209 05:56:46.073521 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:56:46.073529 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:56:46.073593 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:56:46.099148 1437114 cri.go:89] found id: ""
	I1209 05:56:46.099175 1437114 logs.go:282] 0 containers: []
	W1209 05:56:46.099185 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:56:46.099193 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:56:46.099252 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:56:46.123167 1437114 cri.go:89] found id: ""
	I1209 05:56:46.123191 1437114 logs.go:282] 0 containers: []
	W1209 05:56:46.123200 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:56:46.123207 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:56:46.123271 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:56:46.151973 1437114 cri.go:89] found id: ""
	I1209 05:56:46.151999 1437114 logs.go:282] 0 containers: []
	W1209 05:56:46.152008 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:56:46.152035 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:56:46.152098 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:56:46.177766 1437114 cri.go:89] found id: ""
	I1209 05:56:46.177798 1437114 logs.go:282] 0 containers: []
	W1209 05:56:46.177807 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:56:46.177813 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:56:46.177871 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:56:46.206986 1437114 cri.go:89] found id: ""
	I1209 05:56:46.207008 1437114 logs.go:282] 0 containers: []
	W1209 05:56:46.207017 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:56:46.207023 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:56:46.207081 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:56:46.233946 1437114 cri.go:89] found id: ""
	I1209 05:56:46.233968 1437114 logs.go:282] 0 containers: []
	W1209 05:56:46.233977 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:56:46.233986 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:56:46.233997 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:56:46.298127 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:56:46.289387   12939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:46.289949   12939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:46.291474   12939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:46.292041   12939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:46.293829   12939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:56:46.289387   12939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:46.289949   12939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:46.291474   12939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:46.292041   12939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:46.293829   12939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:56:46.298150 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:56:46.298162 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:56:46.323208 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:56:46.323239 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:56:46.355077 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:56:46.355106 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:56:46.410415 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:56:46.410452 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:56:48.926721 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:56:48.937257 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:56:48.937332 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:56:48.961648 1437114 cri.go:89] found id: ""
	I1209 05:56:48.961676 1437114 logs.go:282] 0 containers: []
	W1209 05:56:48.961685 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:56:48.961698 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:56:48.961758 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:56:48.989144 1437114 cri.go:89] found id: ""
	I1209 05:56:48.989169 1437114 logs.go:282] 0 containers: []
	W1209 05:56:48.989178 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:56:48.989184 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:56:48.989240 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:56:49.014588 1437114 cri.go:89] found id: ""
	I1209 05:56:49.014613 1437114 logs.go:282] 0 containers: []
	W1209 05:56:49.014622 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:56:49.014628 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:56:49.014691 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:56:49.038311 1437114 cri.go:89] found id: ""
	I1209 05:56:49.038339 1437114 logs.go:282] 0 containers: []
	W1209 05:56:49.038349 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:56:49.038355 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:56:49.038414 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:56:49.062714 1437114 cri.go:89] found id: ""
	I1209 05:56:49.062740 1437114 logs.go:282] 0 containers: []
	W1209 05:56:49.062748 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:56:49.062754 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:56:49.062814 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:56:49.089769 1437114 cri.go:89] found id: ""
	I1209 05:56:49.089798 1437114 logs.go:282] 0 containers: []
	W1209 05:56:49.089807 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:56:49.089815 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:56:49.089892 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:56:49.118456 1437114 cri.go:89] found id: ""
	I1209 05:56:49.118477 1437114 logs.go:282] 0 containers: []
	W1209 05:56:49.118486 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:56:49.118492 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:56:49.118548 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:56:49.146213 1437114 cri.go:89] found id: ""
	I1209 05:56:49.146241 1437114 logs.go:282] 0 containers: []
	W1209 05:56:49.146260 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:56:49.146286 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:56:49.146304 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:56:49.171755 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:56:49.171792 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:56:49.210632 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:56:49.210700 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:56:49.274853 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:56:49.274890 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:56:49.290746 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:56:49.290774 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:56:49.352595 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:56:49.344509   13068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:49.345192   13068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:49.346929   13068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:49.347389   13068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:49.348821   13068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:56:49.344509   13068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:49.345192   13068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:49.346929   13068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:49.347389   13068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:49.348821   13068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:56:51.854276 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:56:51.864787 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:56:51.864868 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:56:51.888399 1437114 cri.go:89] found id: ""
	I1209 05:56:51.888422 1437114 logs.go:282] 0 containers: []
	W1209 05:56:51.888431 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:56:51.888437 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:56:51.888499 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:56:51.913838 1437114 cri.go:89] found id: ""
	I1209 05:56:51.913865 1437114 logs.go:282] 0 containers: []
	W1209 05:56:51.913873 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:56:51.913880 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:56:51.913961 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:56:51.938727 1437114 cri.go:89] found id: ""
	I1209 05:56:51.938768 1437114 logs.go:282] 0 containers: []
	W1209 05:56:51.938794 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:56:51.938811 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:56:51.938885 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:56:51.964549 1437114 cri.go:89] found id: ""
	I1209 05:56:51.964576 1437114 logs.go:282] 0 containers: []
	W1209 05:56:51.964584 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:56:51.964590 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:56:51.964689 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:56:51.988777 1437114 cri.go:89] found id: ""
	I1209 05:56:51.988806 1437114 logs.go:282] 0 containers: []
	W1209 05:56:51.988815 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:56:51.988821 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:56:51.988908 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:56:52.017110 1437114 cri.go:89] found id: ""
	I1209 05:56:52.017138 1437114 logs.go:282] 0 containers: []
	W1209 05:56:52.017147 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:56:52.017154 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:56:52.017219 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:56:52.043184 1437114 cri.go:89] found id: ""
	I1209 05:56:52.043211 1437114 logs.go:282] 0 containers: []
	W1209 05:56:52.043219 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:56:52.043225 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:56:52.043293 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:56:52.068591 1437114 cri.go:89] found id: ""
	I1209 05:56:52.068617 1437114 logs.go:282] 0 containers: []
	W1209 05:56:52.068626 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:56:52.068636 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:56:52.068652 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:56:52.135805 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:56:52.127242   13157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:52.127996   13157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:52.129698   13157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:52.130086   13157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:52.131645   13157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:56:52.127242   13157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:52.127996   13157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:52.129698   13157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:52.130086   13157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:52.131645   13157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:56:52.135824 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:56:52.135837 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:56:52.160848 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:56:52.160884 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:56:52.206902 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:56:52.206930 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:56:52.269206 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:56:52.269242 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:56:54.786534 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:56:54.796870 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:56:54.796942 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:56:54.820891 1437114 cri.go:89] found id: ""
	I1209 05:56:54.820912 1437114 logs.go:282] 0 containers: []
	W1209 05:56:54.820920 1437114 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:56:54.820926 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1209 05:56:54.820983 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:56:54.844219 1437114 cri.go:89] found id: ""
	I1209 05:56:54.844243 1437114 logs.go:282] 0 containers: []
	W1209 05:56:54.844251 1437114 logs.go:284] No container was found matching "etcd"
	I1209 05:56:54.844257 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1209 05:56:54.844314 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:56:54.867467 1437114 cri.go:89] found id: ""
	I1209 05:56:54.867540 1437114 logs.go:282] 0 containers: []
	W1209 05:56:54.867564 1437114 logs.go:284] No container was found matching "coredns"
	I1209 05:56:54.867585 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:56:54.867678 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:56:54.891985 1437114 cri.go:89] found id: ""
	I1209 05:56:54.892007 1437114 logs.go:282] 0 containers: []
	W1209 05:56:54.892053 1437114 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:56:54.892060 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:56:54.892135 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:56:54.915079 1437114 cri.go:89] found id: ""
	I1209 05:56:54.915104 1437114 logs.go:282] 0 containers: []
	W1209 05:56:54.915112 1437114 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:56:54.915119 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:56:54.915175 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:56:54.941729 1437114 cri.go:89] found id: ""
	I1209 05:56:54.941768 1437114 logs.go:282] 0 containers: []
	W1209 05:56:54.941776 1437114 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:56:54.941783 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1209 05:56:54.941840 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:56:54.970033 1437114 cri.go:89] found id: ""
	I1209 05:56:54.970058 1437114 logs.go:282] 0 containers: []
	W1209 05:56:54.970066 1437114 logs.go:284] No container was found matching "kindnet"
	I1209 05:56:54.970072 1437114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 05:56:54.970134 1437114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 05:56:55.004188 1437114 cri.go:89] found id: ""
	I1209 05:56:55.004230 1437114 logs.go:282] 0 containers: []
	W1209 05:56:55.004240 1437114 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 05:56:55.004250 1437114 logs.go:123] Gathering logs for container status ...
	I1209 05:56:55.004264 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:56:55.034996 1437114 logs.go:123] Gathering logs for kubelet ...
	I1209 05:56:55.035025 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:56:55.091574 1437114 logs.go:123] Gathering logs for dmesg ...
	I1209 05:56:55.091610 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:56:55.108302 1437114 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:56:55.108331 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:56:55.172944 1437114 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:56:55.163616   13287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:55.164399   13287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:55.166155   13287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:55.166466   13287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:55.168546   13287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 05:56:55.163616   13287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:55.164399   13287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:55.166155   13287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:55.166466   13287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:56:55.168546   13287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:56:55.172964 1437114 logs.go:123] Gathering logs for containerd ...
	I1209 05:56:55.172985 1437114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1209 05:56:57.700005 1437114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:56:57.714279 1437114 out.go:203] 
	W1209 05:56:57.717113 1437114 out.go:285] X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1209 05:56:57.717154 1437114 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1209 05:56:57.717169 1437114 out.go:285] * Related issues:
	W1209 05:56:57.717186 1437114 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	W1209 05:56:57.717204 1437114 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	I1209 05:56:57.720208 1437114 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 09 05:50:54 newest-cni-262540 containerd[557]: time="2025-12-09T05:50:54.140457949Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 09 05:50:54 newest-cni-262540 containerd[557]: time="2025-12-09T05:50:54.140531030Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 09 05:50:54 newest-cni-262540 containerd[557]: time="2025-12-09T05:50:54.140633493Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 09 05:50:54 newest-cni-262540 containerd[557]: time="2025-12-09T05:50:54.140719603Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 09 05:50:54 newest-cni-262540 containerd[557]: time="2025-12-09T05:50:54.140780722Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 09 05:50:54 newest-cni-262540 containerd[557]: time="2025-12-09T05:50:54.140839280Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 09 05:50:54 newest-cni-262540 containerd[557]: time="2025-12-09T05:50:54.140899447Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 09 05:50:54 newest-cni-262540 containerd[557]: time="2025-12-09T05:50:54.140960081Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 09 05:50:54 newest-cni-262540 containerd[557]: time="2025-12-09T05:50:54.141027665Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 09 05:50:54 newest-cni-262540 containerd[557]: time="2025-12-09T05:50:54.141111133Z" level=info msg="Connect containerd service"
	Dec 09 05:50:54 newest-cni-262540 containerd[557]: time="2025-12-09T05:50:54.141485580Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 09 05:50:54 newest-cni-262540 containerd[557]: time="2025-12-09T05:50:54.142145599Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 09 05:50:54 newest-cni-262540 containerd[557]: time="2025-12-09T05:50:54.154449407Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 09 05:50:54 newest-cni-262540 containerd[557]: time="2025-12-09T05:50:54.154518566Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 09 05:50:54 newest-cni-262540 containerd[557]: time="2025-12-09T05:50:54.154573474Z" level=info msg="Start subscribing containerd event"
	Dec 09 05:50:54 newest-cni-262540 containerd[557]: time="2025-12-09T05:50:54.154621735Z" level=info msg="Start recovering state"
	Dec 09 05:50:54 newest-cni-262540 containerd[557]: time="2025-12-09T05:50:54.192831791Z" level=info msg="Start event monitor"
	Dec 09 05:50:54 newest-cni-262540 containerd[557]: time="2025-12-09T05:50:54.193022399Z" level=info msg="Start cni network conf syncer for default"
	Dec 09 05:50:54 newest-cni-262540 containerd[557]: time="2025-12-09T05:50:54.193095554Z" level=info msg="Start streaming server"
	Dec 09 05:50:54 newest-cni-262540 containerd[557]: time="2025-12-09T05:50:54.193158043Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 09 05:50:54 newest-cni-262540 containerd[557]: time="2025-12-09T05:50:54.193246959Z" level=info msg="runtime interface starting up..."
	Dec 09 05:50:54 newest-cni-262540 containerd[557]: time="2025-12-09T05:50:54.193315946Z" level=info msg="starting plugins..."
	Dec 09 05:50:54 newest-cni-262540 containerd[557]: time="2025-12-09T05:50:54.193399907Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 09 05:50:54 newest-cni-262540 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 09 05:50:54 newest-cni-262540 containerd[557]: time="2025-12-09T05:50:54.195297741Z" level=info msg="containerd successfully booted in 0.080443s"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:57:10.512458   13934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:57:10.513193   13934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:57:10.514840   13934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:57:10.515454   13934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 05:57:10.517064   13934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +25.904254] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:14] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:16] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:18] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:19] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:30] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:32] overlayfs: idmapped layers are currently not supported
	[ +28.114653] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:33] overlayfs: idmapped layers are currently not supported
	[ +23.720849] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:34] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:35] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:36] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:37] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:38] overlayfs: idmapped layers are currently not supported
	[ +23.656275] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:39] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:57] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:58] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:00] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:02] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:03] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:05] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:06] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 9 05:31] overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	
	
	==> kernel <==
	 05:57:10 up  8:39,  0 user,  load average: 1.12, 0.76, 1.11
	Linux newest-cni-262540 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 09 05:57:07 newest-cni-262540 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 09 05:57:07 newest-cni-262540 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
	Dec 09 05:57:07 newest-cni-262540 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 05:57:07 newest-cni-262540 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 05:57:07 newest-cni-262540 kubelet[13793]: E1209 05:57:07.746388   13793 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 09 05:57:07 newest-cni-262540 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 09 05:57:07 newest-cni-262540 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 09 05:57:08 newest-cni-262540 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
	Dec 09 05:57:08 newest-cni-262540 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 05:57:08 newest-cni-262540 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 05:57:08 newest-cni-262540 kubelet[13799]: E1209 05:57:08.499036   13799 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 09 05:57:08 newest-cni-262540 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 09 05:57:08 newest-cni-262540 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 09 05:57:09 newest-cni-262540 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
	Dec 09 05:57:09 newest-cni-262540 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 05:57:09 newest-cni-262540 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 05:57:09 newest-cni-262540 kubelet[13835]: E1209 05:57:09.240265   13835 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 09 05:57:09 newest-cni-262540 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 09 05:57:09 newest-cni-262540 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 09 05:57:09 newest-cni-262540 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7.
	Dec 09 05:57:09 newest-cni-262540 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 05:57:09 newest-cni-262540 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 05:57:09 newest-cni-262540 kubelet[13840]: E1209 05:57:09.997973   13840 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 09 05:57:10 newest-cni-262540 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 09 05:57:10 newest-cni-262540 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
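
Note: the captured log above repeats the same failure at every probe: nothing ever listens on localhost:8443 inside the node, so each kubectl call is refused and minikube finally exits with K8S_APISERVER_MISSING. A minimal way to confirm this from the host, assuming shell access to the node via `minikube ssh` (the profile name is taken from the log; availability of `ss` and `getenforce` in the node image is an assumption):

    # Check whether anything is listening on the apiserver port inside the node
    # (assumes iproute2's `ss` is present in the node image):
    minikube ssh -p newest-cni-262540 -- sudo ss -ltn 'sport = :8443'
    # Follow the SELinux suggestion from the error output above; getenforce may
    # not be installed on the Debian-based node image, in which case SELinux
    # is effectively not enforcing:
    minikube ssh -p newest-cni-262540 -- getenforce
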
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-262540 -n newest-cni-262540
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-262540 -n newest-cni-262540: exit status 2 (355.182345ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "newest-cni-262540" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (9.14s)
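
Note: the kubelet journal in the dump above pins the root cause. Every restart exits with "kubelet is configured to not run on a host using cgroup v1", so no static pods (and therefore no apiserver) can ever start, which matches the K8S_APISERVER_MISSING exit and the Stopped apiserver status. A quick host-side check, as a sketch assuming a Linux host with coreutils:

    # Print the filesystem type of the cgroup mount point:
    #   cgroup2fs -> unified cgroup v2; tmpfs -> legacy cgroup v1 hierarchy
    stat -fc %T /sys/fs/cgroup/

On a cgroup v1 host, booting with systemd.unified_cgroup_hierarchy=1 on the kernel command line is the usual way to switch to cgroup v2; whether this kubelet build can instead be told to tolerate v1 depends on its cgroup-v1 deprecation settings, which are not shown in this log.
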

x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (279.9s)
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1209 06:00:32.362106 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/default-k8s-diff-port-564611/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1209 06:00:32.751412 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
	(previous line repeated 33 more times)
E1209 06:01:06.731936 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/addons-221952/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
	(previous line repeated 74 more times)
E1209 06:02:22.069152 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-717497/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
	(previous line repeated 16 more times)
E1209 06:02:38.985599 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-717497/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
	(previous line repeated 43 more times)
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1209 06:03:26.932288 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/old-k8s-version-384009/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
[last message repeated 8 more times]
E1209 06:03:35.732467 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/auto-132757/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
[same error repeated 7 more times between 06:03:35.739 and 06:03:36.379, the gap roughly doubling on each retry]
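
The auto-132757 retries above land roughly 7 ms, 13 ms, 21 ms, 41 ms, 81 ms, 161 ms, and 322 ms apart: a doubling, exponential-backoff cadence. Below is a minimal Go sketch of that retry shape using the k8s.io/apimachinery wait helpers. It illustrates the pattern only; it is not minikube's cert-rotation code, and the starting interval, step count, and file path are assumptions.

	package main

	import (
		"fmt"
		"os"
		"time"

		"k8s.io/apimachinery/pkg/util/wait"
	)

	func main() {
		// Roughly the cadence seen above: ~7ms first gap, doubling each retry.
		backoff := wait.Backoff{
			Duration: 7 * time.Millisecond, // first retry gap (read off the log)
			Factor:   2.0,                  // each subsequent gap doubles
			Steps:    12,                   // illustrative cap
		}
		err := wait.ExponentialBackoff(backoff, func() (bool, error) {
			// Stand-in for reloading the client certificate; this path is
			// hypothetical, not the jenkins profile path from the log.
			_, err := os.ReadFile("/tmp/profiles/auto-132757/client.crt")
			return err == nil, nil // not done while the file is missing
		})
		fmt.Println("backoff finished with:", err)
	}
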
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1209 06:03:37.021928 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/auto-132757/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1209 06:03:38.304390 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/auto-132757/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
[last message repeated 2 more times]
E1209 06:03:40.866906 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/auto-132757/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
[last message repeated 4 more times]
E1209 06:03:45.989012 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/auto-132757/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
[last message repeated 9 more times]
E1209 06:03:56.230387 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/auto-132757/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
[last message repeated 12 more times]
E1209 06:04:09.296876 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/default-k8s-diff-port-564611/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
[last message repeated 7 more times]
E1209 06:04:16.711715 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/auto-132757/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
[last message repeated 36 more times]
E1209 06:04:53.742352 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/kindnet-132757/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
[same error repeated 4 more times between 06:04:53.748 and 06:04:53.823]
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1209 06:04:55.032725 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/kindnet-132757/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1209 06:04:56.314176 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/kindnet-132757/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1209 06:04:57.673601 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/auto-132757/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1209 06:04:58.876446 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/kindnet-132757/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
[last message repeated 9 more times]
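
Each warning above is one iteration of the helper's poll: it lists pods in the kubernetes-dashboard namespace by label selector (the %3D in the URL is just an encoded "="), and every attempt dies at TCP connect because the apiserver on 192.168.85.2:8443 is stopped. A minimal client-go sketch of that kind of poll follows; the kubeconfig path and the 3-second interval are assumptions, not the helper's actual values.

	package main

	import (
		"context"
		"fmt"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Hypothetical kubeconfig; the real helper builds its client from
		// the profile under test.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		// The test gives the pod 9 minutes, after which the context reports
		// "context deadline exceeded", the failure seen just below.
		ctx, cancel := context.WithTimeout(context.Background(), 9*time.Minute)
		defer cancel()

		for {
			// Same request as the warnings:
			// GET .../namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app=kubernetes-dashboard
			pods, err := cs.CoreV1().Pods("kubernetes-dashboard").List(ctx,
				metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"})
			switch {
			case err != nil:
				fmt.Println("WARNING: pod list returned:", err) // logged and retried
			case len(pods.Items) > 0:
				fmt.Println("dashboard pod found")
				return
			}
			select {
			case <-ctx.Done():
				fmt.Println("gave up:", ctx.Err())
				return
			case <-time.After(3 * time.Second): // poll interval is an assumption
			}
		}
	}
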
start_stop_delete_test.go:285: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:285: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-842269 -n no-preload-842269
start_stop_delete_test.go:285: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-842269 -n no-preload-842269: exit status 2 (365.020169ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:285: status error: exit status 2 (may be ok)
start_stop_delete_test.go:285: "no-preload-842269" apiserver is not running, skipping kubectl commands (state="Stopped")
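
The --format={{.APIServer}} flag above is a Go text/template rendered against the status command's result, which is why the run prints the single word "Stopped"; the non-zero exit is how the status command reports a not-running component, and the harness tolerates it ("may be ok"). A small sketch of that template mechanism is below; the struct fields here are illustrative, not minikube's exact type.

	package main

	import (
		"os"
		"text/template"
	)

	// Status mimics, in shape only, the object the status command renders.
	type Status struct {
		Host, Kubelet, APIServer, Kubeconfig string
	}

	func main() {
		// "{{.APIServer}}" selects one field, so the output is just "Stopped".
		t := template.Must(template.New("status").Parse("{{.APIServer}}\n"))
		if err := t.Execute(os.Stdout, Status{Host: "Running", APIServer: "Stopped"}); err != nil {
			panic(err)
		}
	}
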
start_stop_delete_test.go:286: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-842269 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:289: (dbg) Non-zero exit: kubectl --context no-preload-842269 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.633µs)
start_stop_delete_test.go:291: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-842269 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:295: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
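
The last check above wanted the addon deployment's image to contain registry.k8s.io/echoserver:1.4, but the describe call itself hit the already-expired deadline, so there was no deployment info to inspect. A client-go sketch of that kind of image assertion follows; the deployment name and namespace are taken from the failed command above, everything else (kubeconfig path, timeout) is assumed.

	package main

	import (
		"context"
		"fmt"
		"log"
		"strings"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig") // hypothetical
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
		defer cancel()

		// Equivalent of "kubectl describe deploy/dashboard-metrics-scraper
		// -n kubernetes-dashboard", reduced to the one field asserted on.
		deploy, err := cs.AppsV1().Deployments("kubernetes-dashboard").
			Get(ctx, "dashboard-metrics-scraper", metav1.GetOptions{})
		if err != nil {
			log.Fatal(err) // with the apiserver down this fails, as it did above
		}
		for _, c := range deploy.Spec.Template.Spec.Containers {
			if strings.Contains(c.Image, "registry.k8s.io/echoserver:1.4") {
				fmt.Println("expected image found in container", c.Name)
				return
			}
		}
		fmt.Println("expected image missing")
	}
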
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-842269
helpers_test.go:243: (dbg) docker inspect no-preload-842269:

-- stdout --
	[
	    {
	        "Id": "9789b34a5453b154c4ceca4f0038c1d7948d7f0f72f334a114a8452d803ad415",
	        "Created": "2025-12-09T05:35:10.617601088Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1429985,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-09T05:45:19.572205739Z",
	            "FinishedAt": "2025-12-09T05:45:18.233836564Z"
	        },
	        "Image": "sha256:e4eb91ed18a24161fce60c7cdd660144ecd5b8c5029dc2dea2c5e423c2f48ce4",
	        "ResolvConfPath": "/var/lib/docker/containers/9789b34a5453b154c4ceca4f0038c1d7948d7f0f72f334a114a8452d803ad415/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9789b34a5453b154c4ceca4f0038c1d7948d7f0f72f334a114a8452d803ad415/hostname",
	        "HostsPath": "/var/lib/docker/containers/9789b34a5453b154c4ceca4f0038c1d7948d7f0f72f334a114a8452d803ad415/hosts",
	        "LogPath": "/var/lib/docker/containers/9789b34a5453b154c4ceca4f0038c1d7948d7f0f72f334a114a8452d803ad415/9789b34a5453b154c4ceca4f0038c1d7948d7f0f72f334a114a8452d803ad415-json.log",
	        "Name": "/no-preload-842269",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-842269:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-842269",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9789b34a5453b154c4ceca4f0038c1d7948d7f0f72f334a114a8452d803ad415",
	                "LowerDir": "/var/lib/docker/overlay2/658f95b1f330fcb0d4774e9611c50218d409d0a889583e0050325b4fe479e9f7-init/diff:/var/lib/docker/overlay2/c44bb57aa59cc265266f37f2bb6e7ec0e7d641c3b4aeaa57e6d23deec6f0d1d4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/658f95b1f330fcb0d4774e9611c50218d409d0a889583e0050325b4fe479e9f7/merged",
	                "UpperDir": "/var/lib/docker/overlay2/658f95b1f330fcb0d4774e9611c50218d409d0a889583e0050325b4fe479e9f7/diff",
	                "WorkDir": "/var/lib/docker/overlay2/658f95b1f330fcb0d4774e9611c50218d409d0a889583e0050325b4fe479e9f7/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-842269",
	                "Source": "/var/lib/docker/volumes/no-preload-842269/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-842269",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-842269",
	                "name.minikube.sigs.k8s.io": "no-preload-842269",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7fcd619b0c6697c145e92186b02d3f8b52fc0617bc693eecdb3992bd01dd5379",
	            "SandboxKey": "/var/run/docker/netns/7fcd619b0c66",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34210"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34211"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34214"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34212"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34213"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-842269": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "6e:db:fc:0d:87:5a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6461bd7226e5723487f325bf78054dc63f1dafa2831abe7b44a8cc288dfa4456",
	                    "EndpointID": "26ea729d3df39a6ce095a6c0877cc7989e68004132accb6fb25a8d1686357af6",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-842269",
	                        "9789b34a5453"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
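
The host side of each published port in a dump like the one above is what the harness dials; it can be read back with the same Go template the provisioning logs use further down (a minimal sketch, assuming the no-preload-842269 container still exists):

	docker container inspect no-preload-842269 \
	  --format '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'
	# per the Ports map above this prints 34210; 8443/tcp maps to 34213

`docker port no-preload-842269` prints the whole mapping at once.
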
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-842269 -n no-preload-842269
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-842269 -n no-preload-842269: exit status 2 (433.525588ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-842269 logs -n 25
helpers_test.go:260: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                 ARGS                                                                                  │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p custom-flannel-132757 sudo systemctl cat kubelet --no-pager                                                                                                        │ custom-flannel-132757     │ jenkins │ v1.37.0 │ 09 Dec 25 06:03 UTC │ 09 Dec 25 06:03 UTC │
	│ ssh     │ -p custom-flannel-132757 sudo journalctl -xeu kubelet --all --full --no-pager                                                                                         │ custom-flannel-132757     │ jenkins │ v1.37.0 │ 09 Dec 25 06:03 UTC │ 09 Dec 25 06:03 UTC │
	│ ssh     │ -p custom-flannel-132757 sudo cat /etc/kubernetes/kubelet.conf                                                                                                        │ custom-flannel-132757     │ jenkins │ v1.37.0 │ 09 Dec 25 06:03 UTC │ 09 Dec 25 06:03 UTC │
	│ ssh     │ -p custom-flannel-132757 sudo cat /var/lib/kubelet/config.yaml                                                                                                        │ custom-flannel-132757     │ jenkins │ v1.37.0 │ 09 Dec 25 06:03 UTC │ 09 Dec 25 06:03 UTC │
	│ ssh     │ -p custom-flannel-132757 sudo systemctl status docker --all --full --no-pager                                                                                         │ custom-flannel-132757     │ jenkins │ v1.37.0 │ 09 Dec 25 06:03 UTC │                     │
	│ ssh     │ -p custom-flannel-132757 sudo systemctl cat docker --no-pager                                                                                                         │ custom-flannel-132757     │ jenkins │ v1.37.0 │ 09 Dec 25 06:03 UTC │ 09 Dec 25 06:03 UTC │
	│ ssh     │ -p custom-flannel-132757 sudo cat /etc/docker/daemon.json                                                                                                             │ custom-flannel-132757     │ jenkins │ v1.37.0 │ 09 Dec 25 06:03 UTC │                     │
	│ ssh     │ -p custom-flannel-132757 sudo docker system info                                                                                                                      │ custom-flannel-132757     │ jenkins │ v1.37.0 │ 09 Dec 25 06:03 UTC │                     │
	│ ssh     │ -p custom-flannel-132757 sudo systemctl status cri-docker --all --full --no-pager                                                                                     │ custom-flannel-132757     │ jenkins │ v1.37.0 │ 09 Dec 25 06:03 UTC │                     │
	│ ssh     │ -p custom-flannel-132757 sudo systemctl cat cri-docker --no-pager                                                                                                     │ custom-flannel-132757     │ jenkins │ v1.37.0 │ 09 Dec 25 06:03 UTC │ 09 Dec 25 06:03 UTC │
	│ ssh     │ -p custom-flannel-132757 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                │ custom-flannel-132757     │ jenkins │ v1.37.0 │ 09 Dec 25 06:03 UTC │                     │
	│ ssh     │ -p custom-flannel-132757 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                          │ custom-flannel-132757     │ jenkins │ v1.37.0 │ 09 Dec 25 06:03 UTC │ 09 Dec 25 06:03 UTC │
	│ ssh     │ -p custom-flannel-132757 sudo cri-dockerd --version                                                                                                                   │ custom-flannel-132757     │ jenkins │ v1.37.0 │ 09 Dec 25 06:03 UTC │ 09 Dec 25 06:03 UTC │
	│ ssh     │ -p custom-flannel-132757 sudo systemctl status containerd --all --full --no-pager                                                                                     │ custom-flannel-132757     │ jenkins │ v1.37.0 │ 09 Dec 25 06:03 UTC │ 09 Dec 25 06:03 UTC │
	│ ssh     │ -p custom-flannel-132757 sudo systemctl cat containerd --no-pager                                                                                                     │ custom-flannel-132757     │ jenkins │ v1.37.0 │ 09 Dec 25 06:03 UTC │ 09 Dec 25 06:03 UTC │
	│ ssh     │ -p custom-flannel-132757 sudo cat /lib/systemd/system/containerd.service                                                                                              │ custom-flannel-132757     │ jenkins │ v1.37.0 │ 09 Dec 25 06:03 UTC │ 09 Dec 25 06:03 UTC │
	│ ssh     │ -p custom-flannel-132757 sudo cat /etc/containerd/config.toml                                                                                                         │ custom-flannel-132757     │ jenkins │ v1.37.0 │ 09 Dec 25 06:03 UTC │ 09 Dec 25 06:03 UTC │
	│ ssh     │ -p custom-flannel-132757 sudo containerd config dump                                                                                                                  │ custom-flannel-132757     │ jenkins │ v1.37.0 │ 09 Dec 25 06:03 UTC │ 09 Dec 25 06:03 UTC │
	│ ssh     │ -p custom-flannel-132757 sudo systemctl status crio --all --full --no-pager                                                                                           │ custom-flannel-132757     │ jenkins │ v1.37.0 │ 09 Dec 25 06:03 UTC │                     │
	│ ssh     │ -p custom-flannel-132757 sudo systemctl cat crio --no-pager                                                                                                           │ custom-flannel-132757     │ jenkins │ v1.37.0 │ 09 Dec 25 06:03 UTC │ 09 Dec 25 06:03 UTC │
	│ ssh     │ -p custom-flannel-132757 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                 │ custom-flannel-132757     │ jenkins │ v1.37.0 │ 09 Dec 25 06:03 UTC │ 09 Dec 25 06:03 UTC │
	│ ssh     │ -p custom-flannel-132757 sudo crio config                                                                                                                             │ custom-flannel-132757     │ jenkins │ v1.37.0 │ 09 Dec 25 06:03 UTC │ 09 Dec 25 06:03 UTC │
	│ delete  │ -p custom-flannel-132757                                                                                                                                              │ custom-flannel-132757     │ jenkins │ v1.37.0 │ 09 Dec 25 06:03 UTC │ 09 Dec 25 06:03 UTC │
	│ start   │ -p enable-default-cni-132757 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd │ enable-default-cni-132757 │ jenkins │ v1.37.0 │ 09 Dec 25 06:03 UTC │ 09 Dec 25 06:04 UTC │
	│ ssh     │ -p enable-default-cni-132757 pgrep -a kubelet                                                                                                                         │ enable-default-cni-132757 │ jenkins │ v1.37.0 │ 09 Dec 25 06:04 UTC │ 09 Dec 25 06:04 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
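
	The Audit table is minikube's persisted command history for this host, printed as one section of `minikube logs`; a sketch for regenerating just this table, assuming the --audit flag of this minikube v1.37.0 build:

	out/minikube-linux-arm64 logs --audit
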
	
	
	==> Last Start <==
	Log file created at: 2025/12/09 06:03:42
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 06:03:42.031098 1484132 out.go:360] Setting OutFile to fd 1 ...
	I1209 06:03:42.031252 1484132 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 06:03:42.031265 1484132 out.go:374] Setting ErrFile to fd 2...
	I1209 06:03:42.031270 1484132 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 06:03:42.031570 1484132 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-1142328/.minikube/bin
	I1209 06:03:42.032084 1484132 out.go:368] Setting JSON to false
	I1209 06:03:42.032985 1484132 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":31545,"bootTime":1765228677,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1209 06:03:42.033063 1484132 start.go:143] virtualization:  
	I1209 06:03:42.036646 1484132 out.go:179] * [enable-default-cni-132757] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1209 06:03:42.041296 1484132 out.go:179]   - MINIKUBE_LOCATION=22081
	I1209 06:03:42.041461 1484132 notify.go:221] Checking for updates...
	I1209 06:03:42.048065 1484132 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 06:03:42.051446 1484132 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22081-1142328/kubeconfig
	I1209 06:03:42.054640 1484132 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-1142328/.minikube
	I1209 06:03:42.058091 1484132 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1209 06:03:42.061345 1484132 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 06:03:42.065222 1484132 config.go:182] Loaded profile config "no-preload-842269": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1209 06:03:42.065380 1484132 driver.go:422] Setting default libvirt URI to qemu:///system
	I1209 06:03:42.112061 1484132 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1209 06:03:42.112232 1484132 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 06:03:42.206435 1484132 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-09 06:03:42.184883452 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1209 06:03:42.206559 1484132 docker.go:319] overlay module found
	I1209 06:03:42.210253 1484132 out.go:179] * Using the docker driver based on user configuration
	I1209 06:03:42.215085 1484132 start.go:309] selected driver: docker
	I1209 06:03:42.215119 1484132 start.go:927] validating driver "docker" against <nil>
	I1209 06:03:42.215136 1484132 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 06:03:42.216162 1484132 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 06:03:42.278660 1484132 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-09 06:03:42.267624043 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1209 06:03:42.278886 1484132 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	E1209 06:03:42.279165 1484132 start_flags.go:481] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I1209 06:03:42.279221 1484132 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 06:03:42.282342 1484132 out.go:179] * Using Docker driver with root privileges
	I1209 06:03:42.285391 1484132 cni.go:84] Creating CNI manager for "bridge"
	I1209 06:03:42.285437 1484132 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1209 06:03:42.285539 1484132 start.go:353] cluster config:
	{Name:enable-default-cni-132757 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:enable-default-cni-132757 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 06:03:42.288902 1484132 out.go:179] * Starting "enable-default-cni-132757" primary control-plane node in "enable-default-cni-132757" cluster
	I1209 06:03:42.291945 1484132 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1209 06:03:42.295127 1484132 out.go:179] * Pulling base image v0.0.48-1765184860-22066 ...
	I1209 06:03:42.298258 1484132 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1209 06:03:42.298329 1484132 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4
	I1209 06:03:42.298342 1484132 cache.go:65] Caching tarball of preloaded images
	I1209 06:03:42.298375 1484132 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local docker daemon
	I1209 06:03:42.298459 1484132 preload.go:238] Found /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1209 06:03:42.298472 1484132 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on containerd
	I1209 06:03:42.298586 1484132 profile.go:143] Saving config to /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/enable-default-cni-132757/config.json ...
	I1209 06:03:42.298605 1484132 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/enable-default-cni-132757/config.json: {Name:mkd5e1f3953c6009731e8a120eb4326baece5abc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 06:03:42.321031 1484132 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local docker daemon, skipping pull
	I1209 06:03:42.321060 1484132 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c exists in daemon, skipping load
	I1209 06:03:42.321093 1484132 cache.go:243] Successfully downloaded all kic artifacts
	I1209 06:03:42.321135 1484132 start.go:360] acquireMachinesLock for enable-default-cni-132757: {Name:mk22b1378fee569ade946bc5103080968dbdafaf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 06:03:42.321269 1484132 start.go:364] duration metric: took 109.51µs to acquireMachinesLock for "enable-default-cni-132757"
	I1209 06:03:42.321300 1484132 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-132757 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:enable-default-cni-132757 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1209 06:03:42.321378 1484132 start.go:125] createHost starting for "" (driver="docker")
	I1209 06:03:42.325167 1484132 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1209 06:03:42.325501 1484132 start.go:159] libmachine.API.Create for "enable-default-cni-132757" (driver="docker")
	I1209 06:03:42.325544 1484132 client.go:173] LocalClient.Create starting
	I1209 06:03:42.325616 1484132 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem
	I1209 06:03:42.325657 1484132 main.go:143] libmachine: Decoding PEM data...
	I1209 06:03:42.325677 1484132 main.go:143] libmachine: Parsing certificate...
	I1209 06:03:42.325748 1484132 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/cert.pem
	I1209 06:03:42.325777 1484132 main.go:143] libmachine: Decoding PEM data...
	I1209 06:03:42.325789 1484132 main.go:143] libmachine: Parsing certificate...
	I1209 06:03:42.326198 1484132 cli_runner.go:164] Run: docker network inspect enable-default-cni-132757 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1209 06:03:42.344646 1484132 cli_runner.go:211] docker network inspect enable-default-cni-132757 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1209 06:03:42.344736 1484132 network_create.go:284] running [docker network inspect enable-default-cni-132757] to gather additional debugging logs...
	I1209 06:03:42.344761 1484132 cli_runner.go:164] Run: docker network inspect enable-default-cni-132757
	W1209 06:03:42.362364 1484132 cli_runner.go:211] docker network inspect enable-default-cni-132757 returned with exit code 1
	I1209 06:03:42.362395 1484132 network_create.go:287] error running [docker network inspect enable-default-cni-132757]: docker network inspect enable-default-cni-132757: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network enable-default-cni-132757 not found
	I1209 06:03:42.362409 1484132 network_create.go:289] output of [docker network inspect enable-default-cni-132757]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network enable-default-cni-132757 not found
	
	** /stderr **
	I1209 06:03:42.362517 1484132 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1209 06:03:42.381963 1484132 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-7a15eec16b1a IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:8a:b7:58:bc:12:6c} reservation:<nil>}
	I1209 06:03:42.382362 1484132 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-fcb9e6b38e8e IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:56:c3:7a:b4:06:4b} reservation:<nil>}
	I1209 06:03:42.382619 1484132 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-8c1346c67d6b IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:82:10:14:75:55:fb} reservation:<nil>}
	I1209 06:03:42.383066 1484132 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001999d80}
	I1209 06:03:42.383090 1484132 network_create.go:124] attempt to create docker network enable-default-cni-132757 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1209 06:03:42.383159 1484132 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=enable-default-cni-132757 enable-default-cni-132757
	I1209 06:03:42.451221 1484132 network_create.go:108] docker network enable-default-cni-132757 192.168.76.0/24 created
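
	Subnet selection above walks the private /24s in order (192.168.49.0, .58.0 and .67.0 are taken by existing bridges) and creates the first free one, 192.168.76.0/24, with a fixed gateway and MTU 1500. The result can be confirmed with the same template style the logs themselves use (a sketch, assuming the network still exists):

	docker network inspect enable-default-cni-132757 \
	  --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'
	# expected, per the log above: 192.168.76.0/24 192.168.76.1
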
	I1209 06:03:42.451251 1484132 kic.go:121] calculated static IP "192.168.76.2" for the "enable-default-cni-132757" container
	I1209 06:03:42.451335 1484132 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1209 06:03:42.476495 1484132 cli_runner.go:164] Run: docker volume create enable-default-cni-132757 --label name.minikube.sigs.k8s.io=enable-default-cni-132757 --label created_by.minikube.sigs.k8s.io=true
	I1209 06:03:42.496393 1484132 oci.go:103] Successfully created a docker volume enable-default-cni-132757
	I1209 06:03:42.496495 1484132 cli_runner.go:164] Run: docker run --rm --name enable-default-cni-132757-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=enable-default-cni-132757 --entrypoint /usr/bin/test -v enable-default-cni-132757:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c -d /var/lib
	I1209 06:03:43.022253 1484132 oci.go:107] Successfully prepared a docker volume enable-default-cni-132757
	I1209 06:03:43.022329 1484132 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1209 06:03:43.022342 1484132 kic.go:194] Starting extracting preloaded images to volume ...
	I1209 06:03:43.022407 1484132 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v enable-default-cni-132757:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c -I lz4 -xf /preloaded.tar -C /extractDir
	I1209 06:03:47.159565 1484132 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v enable-default-cni-132757:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c -I lz4 -xf /preloaded.tar -C /extractDir: (4.137119364s)
	I1209 06:03:47.159598 1484132 kic.go:203] duration metric: took 4.13725253s to extract preloaded images to volume ...
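
	The tar step above unpacks the entire preloaded image cache into the profile's named volume before the node container even starts, which is why the later `crictl images` check reports everything as already present. Where that data lands on the host can be seen with (a sketch):

	docker volume inspect enable-default-cni-132757 --format '{{.Mountpoint}}'
	# typically /var/lib/docker/volumes/enable-default-cni-132757/_data, matching
	# the volume mount "Source" shown in the container inspect output earlier
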
	W1209 06:03:47.159738 1484132 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1209 06:03:47.159850 1484132 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1209 06:03:47.210593 1484132 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname enable-default-cni-132757 --name enable-default-cni-132757 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=enable-default-cni-132757 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=enable-default-cni-132757 --network enable-default-cni-132757 --ip 192.168.76.2 --volume enable-default-cni-132757:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c
	I1209 06:03:47.499723 1484132 cli_runner.go:164] Run: docker container inspect enable-default-cni-132757 --format={{.State.Running}}
	I1209 06:03:47.519712 1484132 cli_runner.go:164] Run: docker container inspect enable-default-cni-132757 --format={{.State.Status}}
	I1209 06:03:47.541860 1484132 cli_runner.go:164] Run: docker exec enable-default-cni-132757 stat /var/lib/dpkg/alternatives/iptables
	I1209 06:03:47.595049 1484132 oci.go:144] the created container "enable-default-cni-132757" has a running status.
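
	That single `docker run` is the entire "machine" for the kic driver, and its flags map one-to-one onto the inspect output shown earlier for the no-preload node. The same invocation, re-wrapped for readability (minikube's --label flags omitted):

	# tmpfs /run and /tmp, a read-only bind of /lib/modules, and /var on the
	# named volume all reappear under "Tmpfs" and "Mounts" in container inspect;
	# each --publish=127.0.0.1:: asks Docker for a random loopback host port,
	# which provisioning then reads back via the 22/tcp HostPort template.
	docker run -d -t --privileged \
	  --security-opt seccomp=unconfined --security-opt apparmor=unconfined \
	  --tmpfs /tmp --tmpfs /run \
	  -v /lib/modules:/lib/modules:ro \
	  --volume enable-default-cni-132757:/var \
	  --hostname enable-default-cni-132757 --name enable-default-cni-132757 \
	  --network enable-default-cni-132757 --ip 192.168.76.2 \
	  --memory=3072mb --cpus=2 -e container=docker \
	  --expose 8443 \
	  --publish=127.0.0.1::8443 --publish=127.0.0.1::22 \
	  --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 \
	  gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c
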
	I1209 06:03:47.595086 1484132 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22081-1142328/.minikube/machines/enable-default-cni-132757/id_rsa...
	I1209 06:03:47.729394 1484132 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22081-1142328/.minikube/machines/enable-default-cni-132757/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1209 06:03:47.756909 1484132 cli_runner.go:164] Run: docker container inspect enable-default-cni-132757 --format={{.State.Status}}
	I1209 06:03:47.779216 1484132 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1209 06:03:47.779240 1484132 kic_runner.go:114] Args: [docker exec --privileged enable-default-cni-132757 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1209 06:03:47.843820 1484132 cli_runner.go:164] Run: docker container inspect enable-default-cni-132757 --format={{.State.Status}}
	I1209 06:03:47.864259 1484132 machine.go:94] provisionDockerMachine start ...
	I1209 06:03:47.864364 1484132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-132757
	I1209 06:03:47.884969 1484132 main.go:143] libmachine: Using SSH client type: native
	I1209 06:03:47.885317 1484132 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 34240 <nil> <nil>}
	I1209 06:03:47.885333 1484132 main.go:143] libmachine: About to run SSH command:
	hostname
	I1209 06:03:47.886008 1484132 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1209 06:03:51.039441 1484132 main.go:143] libmachine: SSH cmd err, output: <nil>: enable-default-cni-132757
	
	I1209 06:03:51.039466 1484132 ubuntu.go:182] provisioning hostname "enable-default-cni-132757"
	I1209 06:03:51.039531 1484132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-132757
	I1209 06:03:51.056565 1484132 main.go:143] libmachine: Using SSH client type: native
	I1209 06:03:51.056875 1484132 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 34240 <nil> <nil>}
	I1209 06:03:51.056891 1484132 main.go:143] libmachine: About to run SSH command:
	sudo hostname enable-default-cni-132757 && echo "enable-default-cni-132757" | sudo tee /etc/hostname
	I1209 06:03:51.216997 1484132 main.go:143] libmachine: SSH cmd err, output: <nil>: enable-default-cni-132757
	
	I1209 06:03:51.217075 1484132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-132757
	I1209 06:03:51.234552 1484132 main.go:143] libmachine: Using SSH client type: native
	I1209 06:03:51.234872 1484132 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 34240 <nil> <nil>}
	I1209 06:03:51.234889 1484132 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\senable-default-cni-132757' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 enable-default-cni-132757/g' /etc/hosts;
				else 
					echo '127.0.1.1 enable-default-cni-132757' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 06:03:51.388263 1484132 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1209 06:03:51.388299 1484132 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22081-1142328/.minikube CaCertPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22081-1142328/.minikube}
	I1209 06:03:51.388322 1484132 ubuntu.go:190] setting up certificates
	I1209 06:03:51.388338 1484132 provision.go:84] configureAuth start
	I1209 06:03:51.388399 1484132 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" enable-default-cni-132757
	I1209 06:03:51.405224 1484132 provision.go:143] copyHostCerts
	I1209 06:03:51.405297 1484132 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.pem, removing ...
	I1209 06:03:51.405312 1484132 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.pem
	I1209 06:03:51.405389 1484132 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.pem (1078 bytes)
	I1209 06:03:51.405484 1484132 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-1142328/.minikube/cert.pem, removing ...
	I1209 06:03:51.405494 1484132 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-1142328/.minikube/cert.pem
	I1209 06:03:51.405521 1484132 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22081-1142328/.minikube/cert.pem (1123 bytes)
	I1209 06:03:51.405573 1484132 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-1142328/.minikube/key.pem, removing ...
	I1209 06:03:51.405583 1484132 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-1142328/.minikube/key.pem
	I1209 06:03:51.405608 1484132 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22081-1142328/.minikube/key.pem (1675 bytes)
	I1209 06:03:51.405658 1484132 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca-key.pem org=jenkins.enable-default-cni-132757 san=[127.0.0.1 192.168.76.2 enable-default-cni-132757 localhost minikube]
	I1209 06:03:52.091548 1484132 provision.go:177] copyRemoteCerts
	I1209 06:03:52.091626 1484132 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 06:03:52.091675 1484132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-132757
	I1209 06:03:52.109760 1484132 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34240 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/enable-default-cni-132757/id_rsa Username:docker}
	I1209 06:03:52.218186 1484132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1209 06:03:52.242583 1484132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1209 06:03:52.260060 1484132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1209 06:03:52.277638 1484132 provision.go:87] duration metric: took 889.267137ms to configureAuth
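
	configureAuth generates a per-machine server cert whose SANs (logged above: 127.0.0.1, 192.168.76.2, the hostname, localhost, minikube) must cover every address a client will dial, then scps it into /etc/docker inside the node. A sketch for double-checking the installed cert from the host, assuming a stock openssl in the node image:

	out/minikube-linux-arm64 -p enable-default-cni-132757 ssh \
	  "sudo openssl x509 -in /etc/docker/server.pem -noout -text" \
	  | grep -A1 'Subject Alternative Name'
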
	I1209 06:03:52.277667 1484132 ubuntu.go:206] setting minikube options for container-runtime
	I1209 06:03:52.277861 1484132 config.go:182] Loaded profile config "enable-default-cni-132757": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1209 06:03:52.277874 1484132 machine.go:97] duration metric: took 4.413593964s to provisionDockerMachine
	I1209 06:03:52.277881 1484132 client.go:176] duration metric: took 9.952328669s to LocalClient.Create
	I1209 06:03:52.277893 1484132 start.go:167] duration metric: took 9.952394685s to libmachine.API.Create "enable-default-cni-132757"
	I1209 06:03:52.277910 1484132 start.go:293] postStartSetup for "enable-default-cni-132757" (driver="docker")
	I1209 06:03:52.277925 1484132 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 06:03:52.277982 1484132 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 06:03:52.278022 1484132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-132757
	I1209 06:03:52.294833 1484132 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34240 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/enable-default-cni-132757/id_rsa Username:docker}
	I1209 06:03:52.399603 1484132 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 06:03:52.402621 1484132 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1209 06:03:52.402649 1484132 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1209 06:03:52.402660 1484132 filesync.go:126] Scanning /home/jenkins/minikube-integration/22081-1142328/.minikube/addons for local assets ...
	I1209 06:03:52.402707 1484132 filesync.go:126] Scanning /home/jenkins/minikube-integration/22081-1142328/.minikube/files for local assets ...
	I1209 06:03:52.402786 1484132 filesync.go:149] local asset: /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/ssl/certs/11442312.pem -> 11442312.pem in /etc/ssl/certs
	I1209 06:03:52.402891 1484132 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 06:03:52.409898 1484132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/ssl/certs/11442312.pem --> /etc/ssl/certs/11442312.pem (1708 bytes)
	I1209 06:03:52.426664 1484132 start.go:296] duration metric: took 148.734382ms for postStartSetup
	I1209 06:03:52.427024 1484132 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" enable-default-cni-132757
	I1209 06:03:52.443592 1484132 profile.go:143] Saving config to /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/enable-default-cni-132757/config.json ...
	I1209 06:03:52.443879 1484132 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1209 06:03:52.443939 1484132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-132757
	I1209 06:03:52.460149 1484132 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34240 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/enable-default-cni-132757/id_rsa Username:docker}
	I1209 06:03:52.560796 1484132 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1209 06:03:52.565710 1484132 start.go:128] duration metric: took 10.244317482s to createHost
	I1209 06:03:52.565738 1484132 start.go:83] releasing machines lock for "enable-default-cni-132757", held for 10.244455267s
	I1209 06:03:52.565831 1484132 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" enable-default-cni-132757
	I1209 06:03:52.582793 1484132 ssh_runner.go:195] Run: cat /version.json
	I1209 06:03:52.582860 1484132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-132757
	I1209 06:03:52.583126 1484132 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 06:03:52.583186 1484132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-132757
	I1209 06:03:52.600588 1484132 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34240 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/enable-default-cni-132757/id_rsa Username:docker}
	I1209 06:03:52.602061 1484132 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34240 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/enable-default-cni-132757/id_rsa Username:docker}
	I1209 06:03:52.793090 1484132 ssh_runner.go:195] Run: systemctl --version
	I1209 06:03:52.799363 1484132 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 06:03:52.803926 1484132 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 06:03:52.804036 1484132 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 06:03:52.831738 1484132 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
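
	Because the cluster uses NetworkPlugin=cni with the bridge CNI, any pre-existing bridge/podman CNI configs are renamed with a .mk_disabled suffix so that only minikube's own config stays active. The renamed files can be listed from inside the node (a sketch):

	sudo ls /etc/cni/net.d/*.mk_disabled
	# per the log above: 87-podman-bridge.conflist and
	# 10-crio-bridge.conflist.disabled, each with .mk_disabled appended
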
	I1209 06:03:52.831766 1484132 start.go:496] detecting cgroup driver to use...
	I1209 06:03:52.831799 1484132 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1209 06:03:52.831849 1484132 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1209 06:03:52.847499 1484132 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1209 06:03:52.860566 1484132 docker.go:218] disabling cri-docker service (if available) ...
	I1209 06:03:52.860625 1484132 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 06:03:52.878379 1484132 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 06:03:52.897234 1484132 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 06:03:53.034906 1484132 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 06:03:53.152586 1484132 docker.go:234] disabling docker service ...
	I1209 06:03:53.152651 1484132 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 06:03:53.173614 1484132 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 06:03:53.186559 1484132 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 06:03:53.296319 1484132 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 06:03:53.416617 1484132 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 06:03:53.429568 1484132 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 06:03:53.443844 1484132 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1209 06:03:53.453166 1484132 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1209 06:03:53.461976 1484132 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1209 06:03:53.462044 1484132 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1209 06:03:53.470228 1484132 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1209 06:03:53.478828 1484132 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1209 06:03:53.487280 1484132 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1209 06:03:53.495680 1484132 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 06:03:53.503945 1484132 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1209 06:03:53.512785 1484132 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1209 06:03:53.521431 1484132 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1209 06:03:53.530316 1484132 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 06:03:53.537269 1484132 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 06:03:53.544197 1484132 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 06:03:53.658696 1484132 ssh_runner.go:195] Run: sudo systemctl restart containerd
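
	The sed pipeline above rewrites /etc/containerd/config.toml in place: the pause image is pinned to registry.k8s.io/pause:3.10.1, SystemdCgroup is forced to false to match the host's cgroupfs driver, the legacy runc v1 runtimes are mapped to io.containerd.runc.v2, conf_dir is pointed at /etc/cni/net.d, and enable_unprivileged_ports is set to true; the daemon-reload plus restart here makes it take effect. The edited keys can be spot-checked in-node with (a sketch):

	sudo grep -E 'sandbox_image|SystemdCgroup|conf_dir|enable_unprivileged_ports' \
	  /etc/containerd/config.toml
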
	I1209 06:03:53.806634 1484132 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1209 06:03:53.806701 1484132 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1209 06:03:53.810470 1484132 start.go:564] Will wait 60s for crictl version
	I1209 06:03:53.810533 1484132 ssh_runner.go:195] Run: which crictl
	I1209 06:03:53.813977 1484132 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1209 06:03:53.842095 1484132 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1209 06:03:53.842182 1484132 ssh_runner.go:195] Run: containerd --version
	I1209 06:03:53.861581 1484132 ssh_runner.go:195] Run: containerd --version
	I1209 06:03:53.885127 1484132 out.go:179] * Preparing Kubernetes v1.34.2 on containerd 2.2.0 ...
	I1209 06:03:53.888035 1484132 cli_runner.go:164] Run: docker network inspect enable-default-cni-132757 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1209 06:03:53.903164 1484132 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1209 06:03:53.906736 1484132 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 06:03:53.916214 1484132 kubeadm.go:884] updating cluster {Name:enable-default-cni-132757 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:enable-default-cni-132757 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 06:03:53.916330 1484132 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1209 06:03:53.916389 1484132 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 06:03:53.941915 1484132 containerd.go:627] all images are preloaded for containerd runtime.
	I1209 06:03:53.941941 1484132 containerd.go:534] Images already preloaded, skipping extraction
	I1209 06:03:53.941998 1484132 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 06:03:53.965014 1484132 containerd.go:627] all images are preloaded for containerd runtime.
	I1209 06:03:53.965035 1484132 cache_images.go:86] Images are preloaded, skipping loading
	I1209 06:03:53.965043 1484132 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.2 containerd true true} ...
	I1209 06:03:53.965165 1484132 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=enable-default-cni-132757 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:enable-default-cni-132757 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge}
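
The empty ExecStart= followed by a second ExecStart= is the standard systemd drop-in idiom: the first line clears the base unit's command so the override can redefine it. A sketch of the 10-kubeadm.conf drop-in scp'd a few lines below (329 bytes), reconstructed from the flags printed above; the exact file layout is an assumption:

	sudo mkdir -p /etc/systemd/system/kubelet.service.d
	sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null <<'EOF'
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=enable-default-cni-132757 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	EOF
	sudo systemctl daemon-reload
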
	I1209 06:03:53.965239 1484132 ssh_runner.go:195] Run: sudo crictl info
	I1209 06:03:53.998139 1484132 cni.go:84] Creating CNI manager for "bridge"
	I1209 06:03:53.998175 1484132 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1209 06:03:53.998205 1484132 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:enable-default-cni-132757 NodeName:enable-default-cni-132757 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1209 06:03:53.998322 1484132 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "enable-default-cni-132757"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
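
A rendered config of this shape can be sanity-checked offline before it is handed to kubeadm init; recent kubeadm releases ship a validate subcommand (assumed available in the v1.34 binary minikube caches; path from the log below):

	sudo /var/lib/minikube/binaries/v1.34.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
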
	
	I1209 06:03:53.998402 1484132 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1209 06:03:54.008156 1484132 binaries.go:51] Found k8s binaries, skipping transfer
	I1209 06:03:54.008240 1484132 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1209 06:03:54.016719 1484132 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (329 bytes)
	I1209 06:03:54.030374 1484132 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1209 06:03:54.043874 1484132 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2238 bytes)
	I1209 06:03:54.057109 1484132 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1209 06:03:54.060925 1484132 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 06:03:54.070619 1484132 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 06:03:54.177607 1484132 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 06:03:54.194360 1484132 certs.go:69] Setting up /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/enable-default-cni-132757 for IP: 192.168.76.2
	I1209 06:03:54.194378 1484132 certs.go:195] generating shared ca certs ...
	I1209 06:03:54.194394 1484132 certs.go:227] acquiring lock for ca certs: {Name:mk15788702f8c4e23b5aeab3f44961d296fab259 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 06:03:54.194528 1484132 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.key
	I1209 06:03:54.194570 1484132 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/proxy-client-ca.key
	I1209 06:03:54.194578 1484132 certs.go:257] generating profile certs ...
	I1209 06:03:54.194642 1484132 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/enable-default-cni-132757/client.key
	I1209 06:03:54.194652 1484132 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/enable-default-cni-132757/client.crt with IP's: []
	I1209 06:03:54.584162 1484132 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/enable-default-cni-132757/client.crt ...
	I1209 06:03:54.584199 1484132 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/enable-default-cni-132757/client.crt: {Name:mkc5d4134dd29068d61f03d7a70225f0298b5216 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 06:03:54.584605 1484132 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/enable-default-cni-132757/client.key ...
	I1209 06:03:54.584633 1484132 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/enable-default-cni-132757/client.key: {Name:mkc72d8d81fa01a60b5de1465cdcb73f5aefb82b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 06:03:54.584904 1484132 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/enable-default-cni-132757/apiserver.key.c1720b2c
	I1209 06:03:54.584927 1484132 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/enable-default-cni-132757/apiserver.crt.c1720b2c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1209 06:03:55.022724 1484132 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/enable-default-cni-132757/apiserver.crt.c1720b2c ...
	I1209 06:03:55.022762 1484132 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/enable-default-cni-132757/apiserver.crt.c1720b2c: {Name:mk742598ba70a8cfc0fc0766ac25ddeed66a4c2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 06:03:55.022966 1484132 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/enable-default-cni-132757/apiserver.key.c1720b2c ...
	I1209 06:03:55.022980 1484132 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/enable-default-cni-132757/apiserver.key.c1720b2c: {Name:mk7395c204ccbc20b7cb74241bb800421eece350 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 06:03:55.023074 1484132 certs.go:382] copying /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/enable-default-cni-132757/apiserver.crt.c1720b2c -> /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/enable-default-cni-132757/apiserver.crt
	I1209 06:03:55.023161 1484132 certs.go:386] copying /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/enable-default-cni-132757/apiserver.key.c1720b2c -> /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/enable-default-cni-132757/apiserver.key
	I1209 06:03:55.023222 1484132 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/enable-default-cni-132757/proxy-client.key
	I1209 06:03:55.023237 1484132 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/enable-default-cni-132757/proxy-client.crt with IP's: []
	I1209 06:03:55.864643 1484132 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/enable-default-cni-132757/proxy-client.crt ...
	I1209 06:03:55.864678 1484132 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/enable-default-cni-132757/proxy-client.crt: {Name:mk3cd12bc03a9e91158c787c595d8ea1ac06d0b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 06:03:55.864873 1484132 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/enable-default-cni-132757/proxy-client.key ...
	I1209 06:03:55.864889 1484132 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/enable-default-cni-132757/proxy-client.key: {Name:mk3984f0c1c46aa71110926021413b1467ace0dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
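
minikube generates these profile certificates in-process (crypto.go), not by shelling out. For illustration only, an equivalent client cert signed by the shared CA could be produced with openssl along these lines; the file names and subject are assumptions, though 1095 days matches the 26280h0m0s CertExpiration in the cluster config above:

	# Hypothetical openssl equivalent of the "minikube-user" client cert.
	openssl genrsa -out client.key 2048
	openssl req -new -key client.key \
	  -subj "/O=system:masters/CN=minikube-user" -out client.csr   # subject assumed
	openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
	  -days 1095 -out client.crt   # 1095d = 26280h
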
	I1209 06:03:55.865112 1484132 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/1144231.pem (1338 bytes)
	W1209 06:03:55.865158 1484132 certs.go:480] ignoring /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/1144231_empty.pem, impossibly tiny 0 bytes
	I1209 06:03:55.865170 1484132 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca-key.pem (1679 bytes)
	I1209 06:03:55.865197 1484132 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem (1078 bytes)
	I1209 06:03:55.865226 1484132 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/cert.pem (1123 bytes)
	I1209 06:03:55.865250 1484132 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/key.pem (1675 bytes)
	I1209 06:03:55.865298 1484132 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/ssl/certs/11442312.pem (1708 bytes)
	I1209 06:03:55.865857 1484132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 06:03:55.883257 1484132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1209 06:03:55.900604 1484132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 06:03:55.918905 1484132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1209 06:03:55.938466 1484132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/enable-default-cni-132757/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1209 06:03:55.956855 1484132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/enable-default-cni-132757/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1209 06:03:55.975184 1484132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/enable-default-cni-132757/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 06:03:55.993928 1484132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/enable-default-cni-132757/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1209 06:03:56.013560 1484132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/1144231.pem --> /usr/share/ca-certificates/1144231.pem (1338 bytes)
	I1209 06:03:56.032898 1484132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/ssl/certs/11442312.pem --> /usr/share/ca-certificates/11442312.pem (1708 bytes)
	I1209 06:03:56.051250 1484132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 06:03:56.069974 1484132 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 06:03:56.083646 1484132 ssh_runner.go:195] Run: openssl version
	I1209 06:03:56.090220 1484132 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/11442312.pem
	I1209 06:03:56.098092 1484132 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/11442312.pem /etc/ssl/certs/11442312.pem
	I1209 06:03:56.106139 1484132 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11442312.pem
	I1209 06:03:56.109942 1484132 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 04:18 /usr/share/ca-certificates/11442312.pem
	I1209 06:03:56.110011 1484132 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11442312.pem
	I1209 06:03:56.150663 1484132 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1209 06:03:56.158085 1484132 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/11442312.pem /etc/ssl/certs/3ec20f2e.0
	I1209 06:03:56.165347 1484132 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1209 06:03:56.172734 1484132 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1209 06:03:56.179811 1484132 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 06:03:56.183487 1484132 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 04:09 /usr/share/ca-certificates/minikubeCA.pem
	I1209 06:03:56.183578 1484132 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 06:03:56.224137 1484132 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1209 06:03:56.231740 1484132 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1209 06:03:56.238860 1484132 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1144231.pem
	I1209 06:03:56.245751 1484132 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1144231.pem /etc/ssl/certs/1144231.pem
	I1209 06:03:56.253201 1484132 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1144231.pem
	I1209 06:03:56.257132 1484132 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 04:18 /usr/share/ca-certificates/1144231.pem
	I1209 06:03:56.257201 1484132 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1144231.pem
	I1209 06:03:56.297800 1484132 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1209 06:03:56.305090 1484132 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/1144231.pem /etc/ssl/certs/51391683.0
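
The ls/openssl/ln sequences above are the classic way to install a CA into the system trust store: OpenSSL looks certificates up under /etc/ssl/certs by subject hash, so each PEM gets a <hash>.0 symlink. Condensed from the commands logged above:

	PEM=/usr/share/ca-certificates/minikubeCA.pem
	sudo ln -fs "$PEM" /etc/ssl/certs/minikubeCA.pem
	HASH=$(openssl x509 -hash -noout -in "$PEM")        # b5213941 in this run
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"
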
	I1209 06:03:56.312055 1484132 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 06:03:56.315236 1484132 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1209 06:03:56.315306 1484132 kubeadm.go:401] StartCluster: {Name:enable-default-cni-132757 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:enable-default-cni-132757 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 06:03:56.315401 1484132 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1209 06:03:56.315456 1484132 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 06:03:56.342267 1484132 cri.go:89] found id: ""
	I1209 06:03:56.342338 1484132 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1209 06:03:56.349788 1484132 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 06:03:56.357125 1484132 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1209 06:03:56.357211 1484132 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 06:03:56.364625 1484132 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 06:03:56.364649 1484132 kubeadm.go:158] found existing configuration files:
	
	I1209 06:03:56.364725 1484132 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1209 06:03:56.372053 1484132 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 06:03:56.372116 1484132 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 06:03:56.379004 1484132 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1209 06:03:56.386316 1484132 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 06:03:56.386409 1484132 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 06:03:56.393222 1484132 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1209 06:03:56.400613 1484132 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 06:03:56.400706 1484132 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 06:03:56.407675 1484132 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1209 06:03:56.414850 1484132 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 06:03:56.414952 1484132 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1209 06:03:56.422006 1484132 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1209 06:03:56.472771 1484132 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1209 06:03:56.473350 1484132 kubeadm.go:319] [preflight] Running pre-flight checks
	I1209 06:03:56.495852 1484132 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1209 06:03:56.495992 1484132 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1209 06:03:56.496099 1484132 kubeadm.go:319] OS: Linux
	I1209 06:03:56.496164 1484132 kubeadm.go:319] CGROUPS_CPU: enabled
	I1209 06:03:56.496237 1484132 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1209 06:03:56.496299 1484132 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1209 06:03:56.496378 1484132 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1209 06:03:56.496445 1484132 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1209 06:03:56.496518 1484132 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1209 06:03:56.496580 1484132 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1209 06:03:56.496654 1484132 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1209 06:03:56.496717 1484132 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1209 06:03:56.561797 1484132 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1209 06:03:56.561951 1484132 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1209 06:03:56.562064 1484132 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1209 06:03:56.566844 1484132 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1209 06:03:56.573543 1484132 out.go:252]   - Generating certificates and keys ...
	I1209 06:03:56.573687 1484132 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1209 06:03:56.573790 1484132 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1209 06:03:57.212708 1484132 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1209 06:03:57.542181 1484132 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1209 06:03:58.145162 1484132 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1209 06:03:58.331219 1484132 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1209 06:03:59.328586 1484132 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1209 06:03:59.328944 1484132 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [enable-default-cni-132757 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1209 06:03:59.675903 1484132 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1209 06:03:59.676300 1484132 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [enable-default-cni-132757 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1209 06:03:59.760398 1484132 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1209 06:04:01.010259 1484132 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1209 06:04:01.316142 1484132 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1209 06:04:01.316559 1484132 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1209 06:04:01.769026 1484132 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1209 06:04:02.122838 1484132 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1209 06:04:02.291629 1484132 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1209 06:04:02.860145 1484132 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1209 06:04:03.089810 1484132 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1209 06:04:03.090490 1484132 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1209 06:04:03.095085 1484132 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1209 06:04:03.098482 1484132 out.go:252]   - Booting up control plane ...
	I1209 06:04:03.098613 1484132 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1209 06:04:03.098733 1484132 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1209 06:04:03.099453 1484132 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1209 06:04:03.115607 1484132 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1209 06:04:03.115778 1484132 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1209 06:04:03.123502 1484132 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1209 06:04:03.123983 1484132 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1209 06:04:03.124116 1484132 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1209 06:04:03.272826 1484132 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1209 06:04:03.272953 1484132 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1209 06:04:05.274598 1484132 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 2.001817716s
	I1209 06:04:05.278783 1484132 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1209 06:04:05.278907 1484132 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1209 06:04:05.279044 1484132 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1209 06:04:05.279146 1484132 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1209 06:04:08.450337 1484132 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.171022016s
	I1209 06:04:09.592451 1484132 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.313735454s
	I1209 06:04:11.280882 1484132 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.002064515s
	I1209 06:04:11.313995 1484132 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1209 06:04:11.328037 1484132 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1209 06:04:11.343484 1484132 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1209 06:04:11.343725 1484132 kubeadm.go:319] [mark-control-plane] Marking the node enable-default-cni-132757 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1209 06:04:11.355399 1484132 kubeadm.go:319] [bootstrap-token] Using token: s13qqk.b7tuyn8gr6gx550y
	I1209 06:04:11.360473 1484132 out.go:252]   - Configuring RBAC rules ...
	I1209 06:04:11.360626 1484132 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1209 06:04:11.362242 1484132 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1209 06:04:11.369622 1484132 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1209 06:04:11.373446 1484132 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1209 06:04:11.386312 1484132 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1209 06:04:11.401562 1484132 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1209 06:04:11.691571 1484132 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1209 06:04:12.126793 1484132 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1209 06:04:12.687512 1484132 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1209 06:04:12.688840 1484132 kubeadm.go:319] 
	I1209 06:04:12.688951 1484132 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1209 06:04:12.689001 1484132 kubeadm.go:319] 
	I1209 06:04:12.689104 1484132 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1209 06:04:12.689113 1484132 kubeadm.go:319] 
	I1209 06:04:12.689139 1484132 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1209 06:04:12.689244 1484132 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1209 06:04:12.689344 1484132 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1209 06:04:12.689383 1484132 kubeadm.go:319] 
	I1209 06:04:12.689490 1484132 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1209 06:04:12.689502 1484132 kubeadm.go:319] 
	I1209 06:04:12.689551 1484132 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1209 06:04:12.689560 1484132 kubeadm.go:319] 
	I1209 06:04:12.689612 1484132 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1209 06:04:12.689691 1484132 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1209 06:04:12.689769 1484132 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1209 06:04:12.689777 1484132 kubeadm.go:319] 
	I1209 06:04:12.689865 1484132 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1209 06:04:12.689946 1484132 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1209 06:04:12.689954 1484132 kubeadm.go:319] 
	I1209 06:04:12.690086 1484132 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token s13qqk.b7tuyn8gr6gx550y \
	I1209 06:04:12.690279 1484132 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:98df943b2e6f85649a9af8e221693a225a3faf636e29a801d7cbe99d348eaf5d \
	I1209 06:04:12.690305 1484132 kubeadm.go:319] 	--control-plane 
	I1209 06:04:12.690309 1484132 kubeadm.go:319] 
	I1209 06:04:12.690404 1484132 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1209 06:04:12.690414 1484132 kubeadm.go:319] 
	I1209 06:04:12.690499 1484132 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token s13qqk.b7tuyn8gr6gx550y \
	I1209 06:04:12.690609 1484132 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:98df943b2e6f85649a9af8e221693a225a3faf636e29a801d7cbe99d348eaf5d 
	I1209 06:04:12.694887 1484132 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1209 06:04:12.695221 1484132 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1209 06:04:12.695400 1484132 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1209 06:04:12.695445 1484132 cni.go:84] Creating CNI manager for "bridge"
	I1209 06:04:12.699824 1484132 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1209 06:04:12.702685 1484132 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1209 06:04:12.710698 1484132 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
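
The 496-byte conflist itself isn't printed. For reference, a minimal bridge CNI configuration of the shape minikube installs looks like the following sketch; values are illustrative except the pod CIDR, which matches the 10.244.0.0/16 chosen above:

	sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	{
	  "cniVersion": "1.0.0",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF
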
	I1209 06:04:12.725033 1484132 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1209 06:04:12.725145 1484132 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 06:04:12.725167 1484132 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes enable-default-cni-132757 minikube.k8s.io/updated_at=2025_12_09T06_04_12_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=604647ccc1f2cd4d60ec88f36255b328e04e507d minikube.k8s.io/name=enable-default-cni-132757 minikube.k8s.io/primary=true
	I1209 06:04:12.873221 1484132 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 06:04:12.873298 1484132 ops.go:34] apiserver oom_adj: -16
	I1209 06:04:13.374136 1484132 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 06:04:13.873301 1484132 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 06:04:14.373886 1484132 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 06:04:14.873631 1484132 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 06:04:15.373632 1484132 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 06:04:15.873433 1484132 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 06:04:16.373602 1484132 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 06:04:16.873933 1484132 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 06:04:17.373343 1484132 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 06:04:17.873932 1484132 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 06:04:18.007679 1484132 kubeadm.go:1114] duration metric: took 5.282608227s to wait for elevateKubeSystemPrivileges
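
The burst of identical "get sa default" runs at 500 ms intervals above is a readiness poll: the default ServiceAccount must exist before RBAC can be granted against it. As a loop:

	until sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default \
	    --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done
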
	I1209 06:04:18.007712 1484132 kubeadm.go:403] duration metric: took 21.692430264s to StartCluster
	I1209 06:04:18.007731 1484132 settings.go:142] acquiring lock: {Name:mk8fa744e3d74bf8a1cbf5ac275c9f1969ad91a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 06:04:18.007801 1484132 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22081-1142328/kubeconfig
	I1209 06:04:18.008894 1484132 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1142328/kubeconfig: {Name:mk0dc127429f88dc4fdfb2d110deebc58207c1b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 06:04:18.009169 1484132 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1209 06:04:18.009302 1484132 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1209 06:04:18.009579 1484132 config.go:182] Loaded profile config "enable-default-cni-132757": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1209 06:04:18.009624 1484132 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1209 06:04:18.009689 1484132 addons.go:70] Setting storage-provisioner=true in profile "enable-default-cni-132757"
	I1209 06:04:18.009710 1484132 addons.go:239] Setting addon storage-provisioner=true in "enable-default-cni-132757"
	I1209 06:04:18.009738 1484132 host.go:66] Checking if "enable-default-cni-132757" exists ...
	I1209 06:04:18.010283 1484132 cli_runner.go:164] Run: docker container inspect enable-default-cni-132757 --format={{.State.Status}}
	I1209 06:04:18.010694 1484132 addons.go:70] Setting default-storageclass=true in profile "enable-default-cni-132757"
	I1209 06:04:18.010719 1484132 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "enable-default-cni-132757"
	I1209 06:04:18.011017 1484132 cli_runner.go:164] Run: docker container inspect enable-default-cni-132757 --format={{.State.Status}}
	I1209 06:04:18.013599 1484132 out.go:179] * Verifying Kubernetes components...
	I1209 06:04:18.017016 1484132 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 06:04:18.051408 1484132 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 06:04:18.052993 1484132 addons.go:239] Setting addon default-storageclass=true in "enable-default-cni-132757"
	I1209 06:04:18.053028 1484132 host.go:66] Checking if "enable-default-cni-132757" exists ...
	I1209 06:04:18.053458 1484132 cli_runner.go:164] Run: docker container inspect enable-default-cni-132757 --format={{.State.Status}}
	I1209 06:04:18.055740 1484132 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 06:04:18.055769 1484132 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1209 06:04:18.055826 1484132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-132757
	I1209 06:04:18.081239 1484132 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1209 06:04:18.081265 1484132 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1209 06:04:18.081353 1484132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-132757
	I1209 06:04:18.096964 1484132 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34240 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/enable-default-cni-132757/id_rsa Username:docker}
	I1209 06:04:18.125690 1484132 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34240 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/enable-default-cni-132757/id_rsa Username:docker}
	I1209 06:04:18.390400 1484132 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
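
Decoded, the sed pipeline above injects two fragments into the CoreDNS Corefile before replacing the ConfigMap: a log directive ahead of errors, and a hosts block ahead of the upstream forward, so that host.minikube.internal resolves from inside the cluster. The injected fragments, taken from the command itself:

	        log                  # inserted before the existing "errors" line
	        hosts {
	           192.168.76.1 host.minikube.internal
	           fallthrough
	        }                    # inserted before "forward . /etc/resolv.conf"
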
	I1209 06:04:18.390549 1484132 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 06:04:18.458674 1484132 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 06:04:18.624056 1484132 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1209 06:04:19.368910 1484132 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1209 06:04:19.369998 1484132 node_ready.go:35] waiting up to 15m0s for node "enable-default-cni-132757" to be "Ready" ...
	I1209 06:04:19.397180 1484132 node_ready.go:49] node "enable-default-cni-132757" is "Ready"
	I1209 06:04:19.397205 1484132 node_ready.go:38] duration metric: took 27.066315ms for node "enable-default-cni-132757" to be "Ready" ...
	I1209 06:04:19.397219 1484132 api_server.go:52] waiting for apiserver process to appear ...
	I1209 06:04:19.397273 1484132 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 06:04:19.584188 1484132 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.125478631s)
	I1209 06:04:19.584423 1484132 api_server.go:72] duration metric: took 1.575221077s to wait for apiserver process to appear ...
	I1209 06:04:19.584432 1484132 api_server.go:88] waiting for apiserver healthz status ...
	I1209 06:04:19.584448 1484132 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1209 06:04:19.599054 1484132 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
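
The same probe can be run by hand against the apiserver; -k is needed when the minikube CA is not in the client's trust store:

	curl -sk https://192.168.76.2:8443/healthz   # prints "ok" when healthy
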
	I1209 06:04:19.601015 1484132 api_server.go:141] control plane version: v1.34.2
	I1209 06:04:19.601047 1484132 api_server.go:131] duration metric: took 16.608495ms to wait for apiserver health ...
	I1209 06:04:19.601056 1484132 system_pods.go:43] waiting for kube-system pods to appear ...
	I1209 06:04:19.606831 1484132 system_pods.go:59] 8 kube-system pods found
	I1209 06:04:19.606921 1484132 system_pods.go:61] "coredns-66bc5c9577-pccpg" [42b01cbc-a4b6-4c6e-b0c7-ed7406d1edc4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1209 06:04:19.606946 1484132 system_pods.go:61] "coredns-66bc5c9577-vk7w8" [bea9d8cf-cf17-43f2-bdfc-dc758ddf263f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1209 06:04:19.606990 1484132 system_pods.go:61] "etcd-enable-default-cni-132757" [bca68365-dafc-48d9-ae17-82cbab4f3b77] Running
	I1209 06:04:19.607016 1484132 system_pods.go:61] "kube-apiserver-enable-default-cni-132757" [8bb19219-4c2b-4d40-b84b-4995b3e46448] Running
	I1209 06:04:19.607041 1484132 system_pods.go:61] "kube-controller-manager-enable-default-cni-132757" [35eafe24-8615-4d6e-8e3f-3471c4a99757] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1209 06:04:19.607073 1484132 system_pods.go:61] "kube-proxy-5jx4j" [0cacd44b-741c-4746-92bb-a7790730a3df] Running
	I1209 06:04:19.607096 1484132 system_pods.go:61] "kube-scheduler-enable-default-cni-132757" [b90bf69c-db35-4090-99d6-7685f9d22157] Running
	I1209 06:04:19.607116 1484132 system_pods.go:61] "storage-provisioner" [c06c91a7-b195-4571-ab83-a0c8ce4f4420] Pending
	I1209 06:04:19.607136 1484132 system_pods.go:74] duration metric: took 6.072367ms to wait for pod list to return data ...
	I1209 06:04:19.607174 1484132 default_sa.go:34] waiting for default service account to be created ...
	I1209 06:04:19.609796 1484132 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1209 06:04:19.611054 1484132 default_sa.go:45] found service account: "default"
	I1209 06:04:19.611076 1484132 default_sa.go:55] duration metric: took 3.882323ms for default service account to be created ...
	I1209 06:04:19.611085 1484132 system_pods.go:116] waiting for k8s-apps to be running ...
	I1209 06:04:19.613339 1484132 addons.go:530] duration metric: took 1.603703413s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1209 06:04:19.617150 1484132 system_pods.go:86] 8 kube-system pods found
	I1209 06:04:19.617178 1484132 system_pods.go:89] "coredns-66bc5c9577-pccpg" [42b01cbc-a4b6-4c6e-b0c7-ed7406d1edc4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1209 06:04:19.617187 1484132 system_pods.go:89] "coredns-66bc5c9577-vk7w8" [bea9d8cf-cf17-43f2-bdfc-dc758ddf263f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1209 06:04:19.617193 1484132 system_pods.go:89] "etcd-enable-default-cni-132757" [bca68365-dafc-48d9-ae17-82cbab4f3b77] Running
	I1209 06:04:19.617199 1484132 system_pods.go:89] "kube-apiserver-enable-default-cni-132757" [8bb19219-4c2b-4d40-b84b-4995b3e46448] Running
	I1209 06:04:19.617206 1484132 system_pods.go:89] "kube-controller-manager-enable-default-cni-132757" [35eafe24-8615-4d6e-8e3f-3471c4a99757] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1209 06:04:19.617210 1484132 system_pods.go:89] "kube-proxy-5jx4j" [0cacd44b-741c-4746-92bb-a7790730a3df] Running
	I1209 06:04:19.617217 1484132 system_pods.go:89] "kube-scheduler-enable-default-cni-132757" [b90bf69c-db35-4090-99d6-7685f9d22157] Running
	I1209 06:04:19.617223 1484132 system_pods.go:89] "storage-provisioner" [c06c91a7-b195-4571-ab83-a0c8ce4f4420] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1209 06:04:19.617229 1484132 system_pods.go:126] duration metric: took 6.138966ms to wait for k8s-apps to be running ...
	I1209 06:04:19.617238 1484132 system_svc.go:44] waiting for kubelet service to be running ....
	I1209 06:04:19.617296 1484132 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 06:04:19.632099 1484132 system_svc.go:56] duration metric: took 14.853087ms WaitForService to wait for kubelet
	I1209 06:04:19.632132 1484132 kubeadm.go:587] duration metric: took 1.622929507s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 06:04:19.632151 1484132 node_conditions.go:102] verifying NodePressure condition ...
	I1209 06:04:19.634916 1484132 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1209 06:04:19.634947 1484132 node_conditions.go:123] node cpu capacity is 2
	I1209 06:04:19.634961 1484132 node_conditions.go:105] duration metric: took 2.803858ms to run NodePressure ...
	I1209 06:04:19.634974 1484132 start.go:242] waiting for startup goroutines ...
	I1209 06:04:19.874584 1484132 kapi.go:214] "coredns" deployment in "kube-system" namespace and "enable-default-cni-132757" context rescaled to 1 replicas
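
minikube performs this rescale through the API (kapi.go); the kubectl equivalent, shown for reference only:

	sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  -n kube-system scale deployment coredns --replicas=1
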
	I1209 06:04:19.874619 1484132 start.go:247] waiting for cluster config update ...
	I1209 06:04:19.874632 1484132 start.go:256] writing updated cluster config ...
	I1209 06:04:19.874975 1484132 ssh_runner.go:195] Run: rm -f paused
	I1209 06:04:19.878492 1484132 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1209 06:04:19.882063 1484132 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-pccpg" in "kube-system" namespace to be "Ready" or be gone ...
	W1209 06:04:21.887340 1484132 pod_ready.go:104] pod "coredns-66bc5c9577-pccpg" is not "Ready", error: <nil>
	W1209 06:04:24.387349 1484132 pod_ready.go:104] pod "coredns-66bc5c9577-pccpg" is not "Ready", error: <nil>
	W1209 06:04:26.387844 1484132 pod_ready.go:104] pod "coredns-66bc5c9577-pccpg" is not "Ready", error: <nil>
	W1209 06:04:28.887357 1484132 pod_ready.go:104] pod "coredns-66bc5c9577-pccpg" is not "Ready", error: <nil>
	I1209 06:04:29.385423 1484132 pod_ready.go:99] pod "coredns-66bc5c9577-pccpg" in "kube-system" namespace is gone: getting pod "coredns-66bc5c9577-pccpg" in "kube-system" namespace (will retry): pods "coredns-66bc5c9577-pccpg" not found
	I1209 06:04:29.385490 1484132 pod_ready.go:86] duration metric: took 9.503399092s for pod "coredns-66bc5c9577-pccpg" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 06:04:29.385514 1484132 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-vk7w8" in "kube-system" namespace to be "Ready" or be gone ...
	W1209 06:04:31.391743 1484132 pod_ready.go:104] pod "coredns-66bc5c9577-vk7w8" is not "Ready", error: <nil>
	W1209 06:04:33.891526 1484132 pod_ready.go:104] pod "coredns-66bc5c9577-vk7w8" is not "Ready", error: <nil>
	W1209 06:04:36.391235 1484132 pod_ready.go:104] pod "coredns-66bc5c9577-vk7w8" is not "Ready", error: <nil>
	W1209 06:04:38.890956 1484132 pod_ready.go:104] pod "coredns-66bc5c9577-vk7w8" is not "Ready", error: <nil>
	W1209 06:04:41.391813 1484132 pod_ready.go:104] pod "coredns-66bc5c9577-vk7w8" is not "Ready", error: <nil>
	W1209 06:04:43.890502 1484132 pod_ready.go:104] pod "coredns-66bc5c9577-vk7w8" is not "Ready", error: <nil>
	W1209 06:04:45.891098 1484132 pod_ready.go:104] pod "coredns-66bc5c9577-vk7w8" is not "Ready", error: <nil>
	W1209 06:04:47.892160 1484132 pod_ready.go:104] pod "coredns-66bc5c9577-vk7w8" is not "Ready", error: <nil>
	W1209 06:04:50.390949 1484132 pod_ready.go:104] pod "coredns-66bc5c9577-vk7w8" is not "Ready", error: <nil>
	I1209 06:04:52.392501 1484132 pod_ready.go:94] pod "coredns-66bc5c9577-vk7w8" is "Ready"
	I1209 06:04:52.392530 1484132 pod_ready.go:86] duration metric: took 23.006998068s for pod "coredns-66bc5c9577-vk7w8" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 06:04:52.395373 1484132 pod_ready.go:83] waiting for pod "etcd-enable-default-cni-132757" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 06:04:52.400135 1484132 pod_ready.go:94] pod "etcd-enable-default-cni-132757" is "Ready"
	I1209 06:04:52.400166 1484132 pod_ready.go:86] duration metric: took 4.765775ms for pod "etcd-enable-default-cni-132757" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 06:04:52.402945 1484132 pod_ready.go:83] waiting for pod "kube-apiserver-enable-default-cni-132757" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 06:04:52.407910 1484132 pod_ready.go:94] pod "kube-apiserver-enable-default-cni-132757" is "Ready"
	I1209 06:04:52.407939 1484132 pod_ready.go:86] duration metric: took 4.971192ms for pod "kube-apiserver-enable-default-cni-132757" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 06:04:52.410405 1484132 pod_ready.go:83] waiting for pod "kube-controller-manager-enable-default-cni-132757" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 06:04:52.590108 1484132 pod_ready.go:94] pod "kube-controller-manager-enable-default-cni-132757" is "Ready"
	I1209 06:04:52.590135 1484132 pod_ready.go:86] duration metric: took 179.699976ms for pod "kube-controller-manager-enable-default-cni-132757" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 06:04:52.790673 1484132 pod_ready.go:83] waiting for pod "kube-proxy-5jx4j" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 06:04:53.190301 1484132 pod_ready.go:94] pod "kube-proxy-5jx4j" is "Ready"
	I1209 06:04:53.190330 1484132 pod_ready.go:86] duration metric: took 399.626304ms for pod "kube-proxy-5jx4j" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 06:04:53.390464 1484132 pod_ready.go:83] waiting for pod "kube-scheduler-enable-default-cni-132757" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 06:04:53.790495 1484132 pod_ready.go:94] pod "kube-scheduler-enable-default-cni-132757" is "Ready"
	I1209 06:04:53.790524 1484132 pod_ready.go:86] duration metric: took 400.035162ms for pod "kube-scheduler-enable-default-cni-132757" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 06:04:53.790537 1484132 pod_ready.go:40] duration metric: took 33.912015439s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1209 06:04:53.848823 1484132 start.go:625] kubectl: 1.33.2, cluster: 1.34.2 (minor skew: 1)
	I1209 06:04:53.851862 1484132 out.go:179] * Done! kubectl is now configured to use "enable-default-cni-132757" cluster and "default" namespace by default
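The pod_ready loop above polls each kube-system pod until it is "Ready" or gone (the first coredns pod disappears because the deployment was rescaled to 1 replica). A rough standalone equivalent of that wait, reusing the context name from this log, would be:

    kubectl --context enable-default-cni-132757 -n kube-system \
      wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=240s

Note that kubectl wait fails if a matched pod is deleted mid-wait, so this only approximates the "Ready or be gone" logic minikube implements itself.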
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 09 05:45:25 no-preload-842269 containerd[555]: time="2025-12-09T05:45:25.686544286Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 09 05:45:25 no-preload-842269 containerd[555]: time="2025-12-09T05:45:25.686621568Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 09 05:45:25 no-preload-842269 containerd[555]: time="2025-12-09T05:45:25.686720651Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 09 05:45:25 no-preload-842269 containerd[555]: time="2025-12-09T05:45:25.686789392Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 09 05:45:25 no-preload-842269 containerd[555]: time="2025-12-09T05:45:25.686856097Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 09 05:45:25 no-preload-842269 containerd[555]: time="2025-12-09T05:45:25.686918545Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 09 05:45:25 no-preload-842269 containerd[555]: time="2025-12-09T05:45:25.686973706Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 09 05:45:25 no-preload-842269 containerd[555]: time="2025-12-09T05:45:25.687041799Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 09 05:45:25 no-preload-842269 containerd[555]: time="2025-12-09T05:45:25.687108406Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 09 05:45:25 no-preload-842269 containerd[555]: time="2025-12-09T05:45:25.687193261Z" level=info msg="Connect containerd service"
	Dec 09 05:45:25 no-preload-842269 containerd[555]: time="2025-12-09T05:45:25.687520145Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 09 05:45:25 no-preload-842269 containerd[555]: time="2025-12-09T05:45:25.688289092Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 09 05:45:25 no-preload-842269 containerd[555]: time="2025-12-09T05:45:25.699337805Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 09 05:45:25 no-preload-842269 containerd[555]: time="2025-12-09T05:45:25.699416343Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 09 05:45:25 no-preload-842269 containerd[555]: time="2025-12-09T05:45:25.699485994Z" level=info msg="Start subscribing containerd event"
	Dec 09 05:45:25 no-preload-842269 containerd[555]: time="2025-12-09T05:45:25.700731392Z" level=info msg="Start recovering state"
	Dec 09 05:45:25 no-preload-842269 containerd[555]: time="2025-12-09T05:45:25.726934659Z" level=info msg="Start event monitor"
	Dec 09 05:45:25 no-preload-842269 containerd[555]: time="2025-12-09T05:45:25.727028597Z" level=info msg="Start cni network conf syncer for default"
	Dec 09 05:45:25 no-preload-842269 containerd[555]: time="2025-12-09T05:45:25.727048600Z" level=info msg="Start streaming server"
	Dec 09 05:45:25 no-preload-842269 containerd[555]: time="2025-12-09T05:45:25.727060752Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 09 05:45:25 no-preload-842269 containerd[555]: time="2025-12-09T05:45:25.727107495Z" level=info msg="runtime interface starting up..."
	Dec 09 05:45:25 no-preload-842269 containerd[555]: time="2025-12-09T05:45:25.727114871Z" level=info msg="starting plugins..."
	Dec 09 05:45:25 no-preload-842269 containerd[555]: time="2025-12-09T05:45:25.727324515Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 09 05:45:25 no-preload-842269 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 09 05:45:25 no-preload-842269 containerd[555]: time="2025-12-09T05:45:25.730766873Z" level=info msg="containerd successfully booted in 0.068739s"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 06:05:10.793962   10179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 06:05:10.794728   10179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 06:05:10.796498   10179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 06:05:10.797108   10179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1209 06:05:10.798755   10179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
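The describe fails because nothing is listening on localhost:8443; the apiserver never came up on this node (see the kubelet section below for why). A quick way to probe the endpoint by hand, using the same binary and kubeconfig as the failed command, would be:

    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl \
      --kubeconfig=/var/lib/minikube/kubeconfig get --raw /readyz
    # or, without kubectl:
    curl -k https://localhost:8443/readyz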
	
	
	==> dmesg <==
	[Dec 9 03:18] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:19] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:30] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:32] overlayfs: idmapped layers are currently not supported
	[ +28.114653] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:33] overlayfs: idmapped layers are currently not supported
	[ +23.720849] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:34] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:35] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:36] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:37] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:38] overlayfs: idmapped layers are currently not supported
	[ +23.656275] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:39] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:57] overlayfs: idmapped layers are currently not supported
	[Dec 9 03:58] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:00] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:02] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:03] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:05] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:06] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 9 05:31] overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	
	
	==> kernel <==
	 06:05:10 up  8:47,  0 user,  load average: 2.06, 1.93, 1.55
	Linux no-preload-842269 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 09 06:05:07 no-preload-842269 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 09 06:05:07 no-preload-842269 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1574.
	Dec 09 06:05:07 no-preload-842269 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 06:05:07 no-preload-842269 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 06:05:07 no-preload-842269 kubelet[10043]: E1209 06:05:07.989089   10043 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 09 06:05:07 no-preload-842269 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 09 06:05:07 no-preload-842269 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 09 06:05:08 no-preload-842269 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1575.
	Dec 09 06:05:08 no-preload-842269 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 06:05:08 no-preload-842269 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 06:05:08 no-preload-842269 kubelet[10049]: E1209 06:05:08.725447   10049 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 09 06:05:08 no-preload-842269 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 09 06:05:08 no-preload-842269 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 09 06:05:09 no-preload-842269 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1576.
	Dec 09 06:05:09 no-preload-842269 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 06:05:09 no-preload-842269 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 06:05:09 no-preload-842269 kubelet[10068]: E1209 06:05:09.452415   10068 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 09 06:05:09 no-preload-842269 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 09 06:05:09 no-preload-842269 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 09 06:05:10 no-preload-842269 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1577.
	Dec 09 06:05:10 no-preload-842269 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 06:05:10 no-preload-842269 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 06:05:10 no-preload-842269 kubelet[10091]: E1209 06:05:10.273468   10091 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 09 06:05:10 no-preload-842269 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 09 06:05:10 no-preload-842269 systemd[1]: kubelet.service: Failed with result 'exit-code'.
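This is the root cause of the no-preload failures (and, by the error text, likely the similar newest-cni ones): the v1.35.0-beta.0 kubelet refuses to run on a cgroup v1 host, fails config validation on every start, and systemd restart-loops it (the counter is at 1577 and climbing), so the static apiserver pod is never launched. A quick check of which cgroup mode a host is on:

    stat -fc %T /sys/fs/cgroup
    # prints "cgroup2fs" on a cgroup v2 host and "tmpfs" on cgroup v1,
    # which this Ubuntu 20.04 jenkins host evidently still uses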
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-842269 -n no-preload-842269
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-842269 -n no-preload-842269: exit status 2 (440.908606ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "no-preload-842269" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (279.90s)
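helpers_test.go reads the apiserver state through a Go template over minikube's status output; the same mechanism can show several component states at once (assuming the Status struct fields keep their documented names), e.g.:

    out/minikube-linux-arm64 status -p no-preload-842269 \
      --format='host:{{.Host}} kubelet:{{.Kubelet}} apiserver:{{.APIServer}}'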

                                                
                                    

Test pass (299/369)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 34.47
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.09
9 TestDownloadOnly/v1.28.0/DeleteAll 0.21
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.2/json-events 27.7
13 TestDownloadOnly/v1.34.2/preload-exists 0
17 TestDownloadOnly/v1.34.2/LogsDuration 0.09
18 TestDownloadOnly/v1.34.2/DeleteAll 0.21
19 TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds 0.14
21 TestDownloadOnly/v1.35.0-beta.0/json-events 6.24
22 TestDownloadOnly/v1.35.0-beta.0/preload-exists 0
26 TestDownloadOnly/v1.35.0-beta.0/LogsDuration 0.09
27 TestDownloadOnly/v1.35.0-beta.0/DeleteAll 0.21
28 TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds 0.14
30 TestBinaryMirror 0.63
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.08
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.08
36 TestAddons/Setup 156.45
38 TestAddons/serial/Volcano 40.87
40 TestAddons/serial/GCPAuth/Namespaces 0.17
41 TestAddons/serial/GCPAuth/FakeCredentials 8.82
44 TestAddons/parallel/Registry 16.17
45 TestAddons/parallel/RegistryCreds 0.76
46 TestAddons/parallel/Ingress 17.43
47 TestAddons/parallel/InspektorGadget 12.03
48 TestAddons/parallel/MetricsServer 5.95
50 TestAddons/parallel/CSI 37.68
51 TestAddons/parallel/Headlamp 16.9
52 TestAddons/parallel/CloudSpanner 5.69
53 TestAddons/parallel/LocalPath 9.87
54 TestAddons/parallel/NvidiaDevicePlugin 5.77
55 TestAddons/parallel/Yakd 11.84
57 TestAddons/StoppedEnableDisable 12.34
58 TestCertOptions 35.28
59 TestCertExpiration 233.39
61 TestForceSystemdFlag 37.83
62 TestForceSystemdEnv 43.05
63 TestDockerEnvContainerd 46.38
67 TestErrorSpam/setup 33.32
68 TestErrorSpam/start 0.77
69 TestErrorSpam/status 1.62
70 TestErrorSpam/pause 1.85
71 TestErrorSpam/unpause 1.69
72 TestErrorSpam/stop 1.59
75 TestFunctional/serial/CopySyncFile 0.01
76 TestFunctional/serial/StartWithProxy 78.8
77 TestFunctional/serial/AuditLog 0
78 TestFunctional/serial/SoftStart 6.87
79 TestFunctional/serial/KubeContext 0.06
80 TestFunctional/serial/KubectlGetPods 0.11
83 TestFunctional/serial/CacheCmd/cache/add_remote 3.68
84 TestFunctional/serial/CacheCmd/cache/add_local 1.34
85 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
86 TestFunctional/serial/CacheCmd/cache/list 0.05
87 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.3
88 TestFunctional/serial/CacheCmd/cache/cache_reload 1.92
89 TestFunctional/serial/CacheCmd/cache/delete 0.17
90 TestFunctional/serial/MinikubeKubectlCmd 0.2
91 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.15
92 TestFunctional/serial/ExtraConfig 43.97
93 TestFunctional/serial/ComponentHealth 0.09
94 TestFunctional/serial/LogsCmd 1.49
95 TestFunctional/serial/LogsFileCmd 1.47
96 TestFunctional/serial/InvalidService 4.7
98 TestFunctional/parallel/ConfigCmd 0.5
99 TestFunctional/parallel/DashboardCmd 9.94
100 TestFunctional/parallel/DryRun 0.4
101 TestFunctional/parallel/InternationalLanguage 0.21
102 TestFunctional/parallel/StatusCmd 1.17
106 TestFunctional/parallel/ServiceCmdConnect 8.6
107 TestFunctional/parallel/AddonsCmd 0.14
108 TestFunctional/parallel/PersistentVolumeClaim 21.02
110 TestFunctional/parallel/SSHCmd 0.73
111 TestFunctional/parallel/CpCmd 2.36
113 TestFunctional/parallel/FileSync 0.39
114 TestFunctional/parallel/CertSync 2.2
118 TestFunctional/parallel/NodeLabels 0.1
120 TestFunctional/parallel/NonActiveRuntimeDisabled 0.57
122 TestFunctional/parallel/License 0.26
124 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.62
125 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
127 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.47
128 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.07
129 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
133 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
134 TestFunctional/parallel/ServiceCmd/DeployApp 6.22
135 TestFunctional/parallel/ProfileCmd/profile_not_create 0.44
136 TestFunctional/parallel/ProfileCmd/profile_list 0.42
137 TestFunctional/parallel/ProfileCmd/profile_json_output 0.44
138 TestFunctional/parallel/MountCmd/any-port 8.61
139 TestFunctional/parallel/ServiceCmd/List 0.59
140 TestFunctional/parallel/ServiceCmd/JSONOutput 0.58
141 TestFunctional/parallel/ServiceCmd/HTTPS 0.4
142 TestFunctional/parallel/ServiceCmd/Format 0.4
143 TestFunctional/parallel/ServiceCmd/URL 0.39
144 TestFunctional/parallel/MountCmd/specific-port 2.36
145 TestFunctional/parallel/MountCmd/VerifyCleanup 2.05
146 TestFunctional/parallel/Version/short 0.06
147 TestFunctional/parallel/Version/components 1.4
148 TestFunctional/parallel/ImageCommands/ImageListShort 0.29
149 TestFunctional/parallel/ImageCommands/ImageListTable 0.26
150 TestFunctional/parallel/ImageCommands/ImageListJson 0.29
151 TestFunctional/parallel/ImageCommands/ImageListYaml 0.28
152 TestFunctional/parallel/ImageCommands/ImageBuild 3.86
153 TestFunctional/parallel/ImageCommands/Setup 0.61
154 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.07
155 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.25
156 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.61
157 TestFunctional/parallel/UpdateContextCmd/no_changes 0.2
158 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.22
159 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.19
160 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.41
161 TestFunctional/parallel/ImageCommands/ImageRemove 0.5
162 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.69
163 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.5
164 TestFunctional/delete_echo-server_images 0.04
165 TestFunctional/delete_my-image_image 0.03
166 TestFunctional/delete_minikube_cached_images 0.02
170 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile 0
172 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog 0
174 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext 0.05
178 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote 3.43
179 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local 1.04
180 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete 0.06
181 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list 0.06
182 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node 0.28
183 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload 1.9
184 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete 0.11
189 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd 1.01
190 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd 0.96
193 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd 0.46
195 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun 0.4
196 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage 0.21
202 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd 0.14
205 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd 0.62
206 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd 1.59
208 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync 0.28
209 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync 1.73
215 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled 0.85
217 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License 0.3
220 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short 0.05
221 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components 0.51
222 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel 0.01
226 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort 0.23
227 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable 0.25
228 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson 0.23
229 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml 0.23
230 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild 3.62
231 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup 0.25
232 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon 1.12
233 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon 1.08
234 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon 1.32
235 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile 0.32
236 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove 0.47
237 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile 0.69
238 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon 0.36
239 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes 0.15
240 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster 0.14
241 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters 0.14
243 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port 1.83
244 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup 2.2
248 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel 0.1
255 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create 0.4
256 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list 0.38
257 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output 0.41
258 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images 0.04
259 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image 0.02
260 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images 0.02
264 TestMultiControlPlane/serial/StartCluster 214.89
265 TestMultiControlPlane/serial/DeployApp 7.06
266 TestMultiControlPlane/serial/PingHostFromPods 1.61
267 TestMultiControlPlane/serial/AddWorkerNode 27.82
268 TestMultiControlPlane/serial/NodeLabels 0.13
269 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.06
270 TestMultiControlPlane/serial/CopyFile 19.89
271 TestMultiControlPlane/serial/StopSecondaryNode 12.92
272 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.79
273 TestMultiControlPlane/serial/RestartSecondaryNode 13.45
274 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.38
275 TestMultiControlPlane/serial/RestartClusterKeepsNodes 99.69
276 TestMultiControlPlane/serial/DeleteSecondaryNode 11.22
277 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.77
278 TestMultiControlPlane/serial/StopCluster 36.58
279 TestMultiControlPlane/serial/RestartCluster 58.8
280 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.79
281 TestMultiControlPlane/serial/AddSecondaryNode 79.88
282 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.14
287 TestJSONOutput/start/Command 77.44
288 TestJSONOutput/start/Audit 0
290 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
291 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
293 TestJSONOutput/pause/Command 0.75
294 TestJSONOutput/pause/Audit 0
296 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
297 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
299 TestJSONOutput/unpause/Command 0.63
300 TestJSONOutput/unpause/Audit 0
302 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
303 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
305 TestJSONOutput/stop/Command 5.97
306 TestJSONOutput/stop/Audit 0
308 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
309 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
310 TestErrorJSONOutput 0.23
312 TestKicCustomNetwork/create_custom_network 54.16
313 TestKicCustomNetwork/use_default_bridge_network 36.51
314 TestKicExistingNetwork 37.4
315 TestKicCustomSubnet 34.08
316 TestKicStaticIP 36.16
317 TestMainNoArgs 0.05
318 TestMinikubeProfile 70.63
321 TestMountStart/serial/StartWithMountFirst 8.53
322 TestMountStart/serial/VerifyMountFirst 0.27
323 TestMountStart/serial/StartWithMountSecond 8.66
324 TestMountStart/serial/VerifyMountSecond 0.26
325 TestMountStart/serial/DeleteFirst 1.72
326 TestMountStart/serial/VerifyMountPostDelete 0.28
327 TestMountStart/serial/Stop 1.29
328 TestMountStart/serial/RestartStopped 7.37
329 TestMountStart/serial/VerifyMountPostStop 0.27
332 TestMultiNode/serial/FreshStart2Nodes 107.9
333 TestMultiNode/serial/DeployApp2Nodes 4.96
334 TestMultiNode/serial/PingHostFrom2Pods 1.01
335 TestMultiNode/serial/AddNode 28.08
336 TestMultiNode/serial/MultiNodeLabels 0.09
337 TestMultiNode/serial/ProfileList 0.82
338 TestMultiNode/serial/CopyFile 10.43
339 TestMultiNode/serial/StopNode 2.38
340 TestMultiNode/serial/StartAfterStop 7.88
341 TestMultiNode/serial/RestartKeepsNodes 74.5
342 TestMultiNode/serial/DeleteNode 5.66
343 TestMultiNode/serial/StopMultiNode 24.15
344 TestMultiNode/serial/RestartMultiNode 52.29
345 TestMultiNode/serial/ValidateNameConflict 34.92
350 TestPreload 121.15
352 TestScheduledStopUnix 110.39
355 TestInsufficientStorage 12.15
356 TestRunningBinaryUpgrade 311.55
359 TestMissingContainerUpgrade 163.19
361 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
362 TestNoKubernetes/serial/StartWithK8s 45.34
363 TestNoKubernetes/serial/StartWithStopK8s 8.43
364 TestNoKubernetes/serial/Start 8.59
365 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
366 TestNoKubernetes/serial/VerifyK8sNotRunning 0.32
367 TestNoKubernetes/serial/ProfileList 1.59
368 TestNoKubernetes/serial/Stop 1.44
369 TestNoKubernetes/serial/StartNoArgs 6.69
370 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.29
371 TestStoppedBinaryUpgrade/Setup 2.75
372 TestStoppedBinaryUpgrade/Upgrade 304.06
373 TestStoppedBinaryUpgrade/MinikubeLogs 1.86
382 TestPause/serial/Start 54.82
383 TestPause/serial/SecondStartNoReconfiguration 6.14
384 TestPause/serial/Pause 0.73
385 TestPause/serial/VerifyStatus 0.32
386 TestPause/serial/Unpause 0.63
387 TestPause/serial/PauseAgain 0.82
388 TestPause/serial/DeletePaused 2.78
389 TestPause/serial/VerifyDeletedResources 0.4
402 TestStartStop/group/old-k8s-version/serial/FirstStart 55.39
403 TestStartStop/group/old-k8s-version/serial/DeployApp 9.45
404 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.17
405 TestStartStop/group/old-k8s-version/serial/Stop 12.11
406 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
407 TestStartStop/group/old-k8s-version/serial/SecondStart 49.01
408 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
409 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.1
410 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.27
411 TestStartStop/group/old-k8s-version/serial/Pause 3.18
413 TestStartStop/group/embed-certs/serial/FirstStart 83.4
416 TestStartStop/group/embed-certs/serial/DeployApp 9.32
417 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.12
418 TestStartStop/group/embed-certs/serial/Stop 12.11
419 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.2
420 TestStartStop/group/embed-certs/serial/SecondStart 49.38
421 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
422 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.1
423 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.24
424 TestStartStop/group/embed-certs/serial/Pause 3.13
426 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 79.77
427 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.36
428 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.05
429 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.1
430 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.18
431 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 51.87
432 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
433 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.09
434 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.26
435 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.07
440 TestStartStop/group/no-preload/serial/Stop 1.3
441 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.19
443 TestStartStop/group/newest-cni/serial/DeployApp 0
445 TestStartStop/group/newest-cni/serial/Stop 1.31
446 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.18
449 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
450 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
451 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.24
TestDownloadOnly/v1.28.0/json-events (34.47s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-415470 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-415470 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (34.471820935s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (34.47s)

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1209 04:07:53.417946 1144231 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
I1209 04:07:53.418029 1144231 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-415470
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-415470: exit status 85 (89.040368ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                         ARGS                                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-415470 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-415470 │ jenkins │ v1.37.0 │ 09 Dec 25 04:07 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/09 04:07:18
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 04:07:18.987322 1144236 out.go:360] Setting OutFile to fd 1 ...
	I1209 04:07:18.987449 1144236 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 04:07:18.987459 1144236 out.go:374] Setting ErrFile to fd 2...
	I1209 04:07:18.987464 1144236 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 04:07:18.987700 1144236 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-1142328/.minikube/bin
	W1209 04:07:18.987824 1144236 root.go:314] Error reading config file at /home/jenkins/minikube-integration/22081-1142328/.minikube/config/config.json: open /home/jenkins/minikube-integration/22081-1142328/.minikube/config/config.json: no such file or directory
	I1209 04:07:18.988270 1144236 out.go:368] Setting JSON to true
	I1209 04:07:18.989041 1144236 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":24562,"bootTime":1765228677,"procs":161,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1209 04:07:18.989098 1144236 start.go:143] virtualization:  
	I1209 04:07:18.995142 1144236 out.go:99] [download-only-415470] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	W1209 04:07:18.995333 1144236 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/preloaded-tarball: no such file or directory
	I1209 04:07:18.995458 1144236 notify.go:221] Checking for updates...
	I1209 04:07:18.998550 1144236 out.go:171] MINIKUBE_LOCATION=22081
	I1209 04:07:19.001881 1144236 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 04:07:19.005320 1144236 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22081-1142328/kubeconfig
	I1209 04:07:19.008564 1144236 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-1142328/.minikube
	I1209 04:07:19.011533 1144236 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1209 04:07:19.017420 1144236 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1209 04:07:19.017691 1144236 driver.go:422] Setting default libvirt URI to qemu:///system
	I1209 04:07:19.045777 1144236 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1209 04:07:19.045888 1144236 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 04:07:19.101519 1144236 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-12-09 04:07:19.092605725 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1209 04:07:19.101619 1144236 docker.go:319] overlay module found
	I1209 04:07:19.104626 1144236 out.go:99] Using the docker driver based on user configuration
	I1209 04:07:19.104664 1144236 start.go:309] selected driver: docker
	I1209 04:07:19.104671 1144236 start.go:927] validating driver "docker" against <nil>
	I1209 04:07:19.104778 1144236 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 04:07:19.155661 1144236 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-12-09 04:07:19.147166128 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1209 04:07:19.155823 1144236 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1209 04:07:19.156205 1144236 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1209 04:07:19.156409 1144236 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1209 04:07:19.159587 1144236 out.go:171] Using Docker driver with root privileges
	I1209 04:07:19.162681 1144236 cni.go:84] Creating CNI manager for ""
	I1209 04:07:19.162752 1144236 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1209 04:07:19.162765 1144236 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1209 04:07:19.162858 1144236 start.go:353] cluster config:
	{Name:download-only-415470 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-415470 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 04:07:19.165805 1144236 out.go:99] Starting "download-only-415470" primary control-plane node in "download-only-415470" cluster
	I1209 04:07:19.165821 1144236 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1209 04:07:19.168711 1144236 out.go:99] Pulling base image v0.0.48-1765184860-22066 ...
	I1209 04:07:19.168746 1144236 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1209 04:07:19.168890 1144236 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local docker daemon
	I1209 04:07:19.183751 1144236 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c to local cache
	I1209 04:07:19.183970 1144236 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local cache directory
	I1209 04:07:19.184095 1144236 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c to local cache
	I1209 04:07:19.227434 1144236 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
	I1209 04:07:19.227459 1144236 cache.go:65] Caching tarball of preloaded images
	I1209 04:07:19.227638 1144236 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1209 04:07:19.231084 1144236 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1209 04:07:19.231118 1144236 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4 from gcs api...
	I1209 04:07:19.316496 1144236 preload.go:295] Got checksum from GCS API "38d7f581f2fa4226c8af2c9106b982b7"
	I1209 04:07:19.316625 1144236 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:38d7f581f2fa4226c8af2c9106b982b7 -> /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
	I1209 04:07:25.501422 1144236 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c as a tarball
	
	
	* The control-plane node download-only-415470 host does not exist
	  To start a cluster, run: "minikube start -p download-only-415470"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.09s)
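The json-events run above is checksummed end to end: the test fetches the object's md5 from the GCS API, then hands it to the downloader via the ?checksum=md5:... query so the tarball is verified after transfer. The same verification can be reproduced by hand with the URL and checksum from the log:

    curl -fLo preload.tar.lz4 \
      'https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4'
    echo '38d7f581f2fa4226c8af2c9106b982b7  preload.tar.lz4' | md5sum -c -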

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-415470
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestDownloadOnly/v1.34.2/json-events (27.7s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-731384 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-731384 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (27.69632695s)
--- PASS: TestDownloadOnly/v1.34.2/json-events (27.70s)

                                                
                                    
TestDownloadOnly/v1.34.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/preload-exists
I1209 04:08:21.549404 1144231 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
I1209 04:08:21.549438 1144231 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.2/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.2/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-731384
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-731384: exit status 85 (88.340228ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                         ARGS                                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-415470 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-415470 │ jenkins │ v1.37.0 │ 09 Dec 25 04:07 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                 │ minikube             │ jenkins │ v1.37.0 │ 09 Dec 25 04:07 UTC │ 09 Dec 25 04:07 UTC │
	│ delete  │ -p download-only-415470                                                                                                                                                               │ download-only-415470 │ jenkins │ v1.37.0 │ 09 Dec 25 04:07 UTC │ 09 Dec 25 04:07 UTC │
	│ start   │ -o=json --download-only -p download-only-731384 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-731384 │ jenkins │ v1.37.0 │ 09 Dec 25 04:07 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/09 04:07:53
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 04:07:53.895556 1144443 out.go:360] Setting OutFile to fd 1 ...
	I1209 04:07:53.895760 1144443 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 04:07:53.895787 1144443 out.go:374] Setting ErrFile to fd 2...
	I1209 04:07:53.895807 1144443 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 04:07:53.896111 1144443 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-1142328/.minikube/bin
	I1209 04:07:53.896526 1144443 out.go:368] Setting JSON to true
	I1209 04:07:53.897393 1144443 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":24597,"bootTime":1765228677,"procs":156,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1209 04:07:53.897481 1144443 start.go:143] virtualization:  
	I1209 04:07:53.900945 1144443 out.go:99] [download-only-731384] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1209 04:07:53.901312 1144443 notify.go:221] Checking for updates...
	I1209 04:07:53.905049 1144443 out.go:171] MINIKUBE_LOCATION=22081
	I1209 04:07:53.908033 1144443 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 04:07:53.910868 1144443 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22081-1142328/kubeconfig
	I1209 04:07:53.913735 1144443 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-1142328/.minikube
	I1209 04:07:53.916696 1144443 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1209 04:07:53.922536 1144443 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1209 04:07:53.922804 1144443 driver.go:422] Setting default libvirt URI to qemu:///system
	I1209 04:07:53.957816 1144443 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1209 04:07:53.957934 1144443 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 04:07:54.040547 1144443 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:43 SystemTime:2025-12-09 04:07:54.030283586 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1209 04:07:54.040658 1144443 docker.go:319] overlay module found
	I1209 04:07:54.043740 1144443 out.go:99] Using the docker driver based on user configuration
	I1209 04:07:54.043784 1144443 start.go:309] selected driver: docker
	I1209 04:07:54.043792 1144443 start.go:927] validating driver "docker" against <nil>
	I1209 04:07:54.043896 1144443 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 04:07:54.099225 1144443 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:43 SystemTime:2025-12-09 04:07:54.090277104 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1209 04:07:54.099393 1144443 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1209 04:07:54.099665 1144443 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1209 04:07:54.099818 1144443 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1209 04:07:54.103005 1144443 out.go:171] Using Docker driver with root privileges
	I1209 04:07:54.105762 1144443 cni.go:84] Creating CNI manager for ""
	I1209 04:07:54.105834 1144443 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1209 04:07:54.105847 1144443 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1209 04:07:54.105930 1144443 start.go:353] cluster config:
	{Name:download-only-731384 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:download-only-731384 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 04:07:54.108898 1144443 out.go:99] Starting "download-only-731384" primary control-plane node in "download-only-731384" cluster
	I1209 04:07:54.108924 1144443 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1209 04:07:54.111784 1144443 out.go:99] Pulling base image v0.0.48-1765184860-22066 ...
	I1209 04:07:54.111835 1144443 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1209 04:07:54.112007 1144443 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local docker daemon
	I1209 04:07:54.128424 1144443 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c to local cache
	I1209 04:07:54.128561 1144443 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local cache directory
	I1209 04:07:54.128586 1144443 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local cache directory, skipping pull
	I1209 04:07:54.128594 1144443 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c exists in cache, skipping pull
	I1209 04:07:54.128602 1144443 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c as a tarball
	I1209 04:07:54.172226 1144443 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.2/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4
	I1209 04:07:54.172264 1144443 cache.go:65] Caching tarball of preloaded images
	I1209 04:07:54.172480 1144443 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1209 04:07:54.175655 1144443 out.go:99] Downloading Kubernetes v1.34.2 preload ...
	I1209 04:07:54.175689 1144443 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4 from gcs api...
	I1209 04:07:54.256384 1144443 preload.go:295] Got checksum from GCS API "cd1a05d5493c9270e248bf47fb3f071d"
	I1209 04:07:54.256455 1144443 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.2/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4?checksum=md5:cd1a05d5493c9270e248bf47fb3f071d -> /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-731384 host does not exist
	  To start a cluster, run: "minikube start -p download-only-731384"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.2/LogsDuration (0.09s)
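
Note: exit status 85 is the expected outcome here rather than a failure: the profile was created with --download-only, so no host was ever started and "minikube logs" has nothing to collect. A minimal reproduction, assuming a locally built out/minikube-linux-arm64 (commands taken from this run):

    # create a download-only profile; no container/VM is started
    out/minikube-linux-arm64 start -o=json --download-only -p download-only-731384 --force --kubernetes-version=v1.34.2 --container-runtime=containerd --driver=docker
    # logs for a host that never existed; expect exit status 85
    out/minikube-linux-arm64 logs -p download-only-731384; echo "exit: $?"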

TestDownloadOnly/v1.34.2/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.34.2/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.34.2/DeleteAll (0.21s)

TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-731384
--- PASS: TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnly/v1.35.0-beta.0/json-events (6.24s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-830324 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-830324 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (6.243635743s)
--- PASS: TestDownloadOnly/v1.35.0-beta.0/json-events (6.24s)

TestDownloadOnly/v1.35.0-beta.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/preload-exists
I1209 04:08:28.227806 1144231 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
I1209 04:08:28.227839 1144231 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.35.0-beta.0/preload-exists (0.00s)
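
Note: the preload path above follows minikube's naming scheme, preloaded-images-k8s-<schema>-<k8s-version>-<runtime>-<storage driver>-<arch>.tar.lz4, served from the minikube-preloaded-volume-tarballs GCS bucket with an md5 checksum passed as a query parameter. A sketch for fetching and verifying one by hand, using the URL and checksum recorded later in this run:

    # download the v1.35.0-beta.0 containerd/arm64 preload directly
    curl -fLO "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4"
    # verify against the checksum minikube obtained from the GCS API
    echo "4ead9b9dbba82a7ecb06a001f1ffeaf3  preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4" | md5sum -c -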

TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-830324
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-830324: exit status 85 (85.746121ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                             ARGS                                                                                             │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-415470 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd        │ download-only-415470 │ jenkins │ v1.37.0 │ 09 Dec 25 04:07 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                        │ minikube             │ jenkins │ v1.37.0 │ 09 Dec 25 04:07 UTC │ 09 Dec 25 04:07 UTC │
	│ delete  │ -p download-only-415470                                                                                                                                                                      │ download-only-415470 │ jenkins │ v1.37.0 │ 09 Dec 25 04:07 UTC │ 09 Dec 25 04:07 UTC │
	│ start   │ -o=json --download-only -p download-only-731384 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd        │ download-only-731384 │ jenkins │ v1.37.0 │ 09 Dec 25 04:07 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                        │ minikube             │ jenkins │ v1.37.0 │ 09 Dec 25 04:08 UTC │ 09 Dec 25 04:08 UTC │
	│ delete  │ -p download-only-731384                                                                                                                                                                      │ download-only-731384 │ jenkins │ v1.37.0 │ 09 Dec 25 04:08 UTC │ 09 Dec 25 04:08 UTC │
	│ start   │ -o=json --download-only -p download-only-830324 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-830324 │ jenkins │ v1.37.0 │ 09 Dec 25 04:08 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/09 04:08:22
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 04:08:22.029738 1144642 out.go:360] Setting OutFile to fd 1 ...
	I1209 04:08:22.029932 1144642 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 04:08:22.029965 1144642 out.go:374] Setting ErrFile to fd 2...
	I1209 04:08:22.029986 1144642 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 04:08:22.030280 1144642 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-1142328/.minikube/bin
	I1209 04:08:22.030756 1144642 out.go:368] Setting JSON to true
	I1209 04:08:22.031633 1144642 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":24625,"bootTime":1765228677,"procs":156,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1209 04:08:22.031736 1144642 start.go:143] virtualization:  
	I1209 04:08:22.035053 1144642 out.go:99] [download-only-830324] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1209 04:08:22.035414 1144642 notify.go:221] Checking for updates...
	I1209 04:08:22.039133 1144642 out.go:171] MINIKUBE_LOCATION=22081
	I1209 04:08:22.042256 1144642 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 04:08:22.045760 1144642 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22081-1142328/kubeconfig
	I1209 04:08:22.048671 1144642 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-1142328/.minikube
	I1209 04:08:22.051564 1144642 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1209 04:08:22.057251 1144642 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1209 04:08:22.057541 1144642 driver.go:422] Setting default libvirt URI to qemu:///system
	I1209 04:08:22.090714 1144642 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1209 04:08:22.090846 1144642 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 04:08:22.157206 1144642 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:43 SystemTime:2025-12-09 04:08:22.148383985 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1209 04:08:22.157316 1144642 docker.go:319] overlay module found
	I1209 04:08:22.160312 1144642 out.go:99] Using the docker driver based on user configuration
	I1209 04:08:22.160355 1144642 start.go:309] selected driver: docker
	I1209 04:08:22.160370 1144642 start.go:927] validating driver "docker" against <nil>
	I1209 04:08:22.160482 1144642 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 04:08:22.221308 1144642 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:43 SystemTime:2025-12-09 04:08:22.212534471 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1209 04:08:22.221474 1144642 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1209 04:08:22.221766 1144642 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1209 04:08:22.221924 1144642 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1209 04:08:22.224978 1144642 out.go:171] Using Docker driver with root privileges
	I1209 04:08:22.227816 1144642 cni.go:84] Creating CNI manager for ""
	I1209 04:08:22.227906 1144642 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1209 04:08:22.227919 1144642 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1209 04:08:22.228223 1144642 start.go:353] cluster config:
	{Name:download-only-830324 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:download-only-830324 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 04:08:22.231223 1144642 out.go:99] Starting "download-only-830324" primary control-plane node in "download-only-830324" cluster
	I1209 04:08:22.231252 1144642 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1209 04:08:22.234179 1144642 out.go:99] Pulling base image v0.0.48-1765184860-22066 ...
	I1209 04:08:22.234263 1144642 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1209 04:08:22.234344 1144642 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local docker daemon
	I1209 04:08:22.250374 1144642 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c to local cache
	I1209 04:08:22.250499 1144642 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local cache directory
	I1209 04:08:22.250518 1144642 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local cache directory, skipping pull
	I1209 04:08:22.250522 1144642 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c exists in cache, skipping pull
	I1209 04:08:22.250528 1144642 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c as a tarball
	I1209 04:08:22.294020 1144642 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1209 04:08:22.294063 1144642 cache.go:65] Caching tarball of preloaded images
	I1209 04:08:22.294250 1144642 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1209 04:08:22.297274 1144642 out.go:99] Downloading Kubernetes v1.35.0-beta.0 preload ...
	I1209 04:08:22.297314 1144642 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 from gcs api...
	I1209 04:08:22.382257 1144642 preload.go:295] Got checksum from GCS API "4ead9b9dbba82a7ecb06a001f1ffeaf3"
	I1209 04:08:22.382309 1144642 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:4ead9b9dbba82a7ecb06a001f1ffeaf3 -> /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-830324 host does not exist
	  To start a cluster, run: "minikube start -p download-only-830324"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.09s)

TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.21s)

TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-830324
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.14s)

TestBinaryMirror (0.63s)

=== RUN   TestBinaryMirror
I1209 04:08:29.484856 1144231 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-411967 --alsologtostderr --binary-mirror http://127.0.0.1:44777 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-411967" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-411967
--- PASS: TestBinaryMirror (0.63s)
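
Note: this test exercises --binary-mirror, which points kubectl/kubelet/kubeadm downloads at an alternate HTTP endpoint instead of dl.k8s.io. A rough local reproduction, assuming any HTTP file server on the port the harness happened to pick (44777 here; any free port would do, and the test uses its own in-process server):

    python3 -m http.server 44777 --bind 127.0.0.1 &   # stand-in mirror, an assumption for illustration
    out/minikube-linux-arm64 start --download-only -p binary-mirror-411967 --alsologtostderr --binary-mirror http://127.0.0.1:44777 --driver=docker --container-runtime=containerd
    out/minikube-linux-arm64 delete -p binary-mirror-411967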

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1060: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-221952
addons_test.go:1060: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-221952: exit status 85 (77.186852ms)

-- stdout --
	* Profile "addons-221952" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-221952"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1071: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-221952
addons_test.go:1071: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-221952: exit status 85 (77.057904ms)

-- stdout --
	* Profile "addons-221952" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-221952"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)
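
Note: both PreSetup checks pass because exit status 85 plus the "Profile ... not found" hint is exactly what the test asserts for addon operations against a profile that does not exist yet:

    # before the addons-221952 profile exists, both commands should exit 85
    out/minikube-linux-arm64 addons enable dashboard -p addons-221952; echo "enable: $?"
    out/minikube-linux-arm64 addons disable dashboard -p addons-221952; echo "disable: $?"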

TestAddons/Setup (156.45s)

=== RUN   TestAddons/Setup
addons_test.go:113: (dbg) Run:  out/minikube-linux-arm64 start -p addons-221952 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:113: (dbg) Done: out/minikube-linux-arm64 start -p addons-221952 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m36.450856615s)
--- PASS: TestAddons/Setup (156.45s)
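
Note: the single-line start command above is hard to scan; the same invocation with line continuations (flags verbatim, fifteen addons enabled in one start):

    out/minikube-linux-arm64 start -p addons-221952 --wait=true --memory=4096 --alsologtostderr \
      --driver=docker --container-runtime=containerd \
      --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots \
      --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget \
      --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin \
      --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher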

TestAddons/serial/Volcano (40.87s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:936: volcano-admission stabilized in 53.276821ms
addons_test.go:944: volcano-controller stabilized in 54.461774ms
addons_test.go:928: volcano-scheduler stabilized in 54.954483ms
addons_test.go:950: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-scheduler-76c996c8bf-p7trn" [b0c0c578-7f6c-47fc-b656-ababd5ac849e] Running
addons_test.go:950: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.002897954s
addons_test.go:954: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-admission-6c447bd768-f49hk" [935188a1-06bc-44bc-bcec-9423ea70a703] Running
addons_test.go:954: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003390657s
addons_test.go:958: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-controllers-6fd4f85cb8-hjl2p" [31360a58-af80-4203-953b-d41c3e8d385f] Running
addons_test.go:958: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003236306s
addons_test.go:963: (dbg) Run:  kubectl --context addons-221952 delete -n volcano-system job volcano-admission-init
addons_test.go:969: (dbg) Run:  kubectl --context addons-221952 create -f testdata/vcjob.yaml
addons_test.go:977: (dbg) Run:  kubectl --context addons-221952 get vcjob -n my-volcano
addons_test.go:995: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:352: "test-job-nginx-0" [a469dd83-bdf7-4a1e-94ee-16f8d58e4388] Pending
helpers_test.go:352: "test-job-nginx-0" [a469dd83-bdf7-4a1e-94ee-16f8d58e4388] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "test-job-nginx-0" [a469dd83-bdf7-4a1e-94ee-16f8d58e4388] Running
addons_test.go:995: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 12.003877954s
addons_test.go:1113: (dbg) Run:  out/minikube-linux-arm64 -p addons-221952 addons disable volcano --alsologtostderr -v=1
addons_test.go:1113: (dbg) Done: out/minikube-linux-arm64 -p addons-221952 addons disable volcano --alsologtostderr -v=1: (12.184210824s)
--- PASS: TestAddons/serial/Volcano (40.87s)
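
Note: the Volcano flow above boils down to: wait for the scheduler/admission/controller pods, delete the one-shot volcano-admission-init job, submit a VolcanoJob, and wait for its pod. A condensed replay (testdata/vcjob.yaml is the manifest from the minikube test tree; kubectl wait stands in for the test's own polling):

    kubectl --context addons-221952 delete -n volcano-system job volcano-admission-init
    kubectl --context addons-221952 create -f testdata/vcjob.yaml
    kubectl --context addons-221952 wait --for=condition=Ready pod -l volcano.sh/job-name=test-job -n my-volcano --timeout=3m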

TestAddons/serial/GCPAuth/Namespaces (0.17s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:690: (dbg) Run:  kubectl --context addons-221952 create ns new-namespace
addons_test.go:704: (dbg) Run:  kubectl --context addons-221952 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.17s)

TestAddons/serial/GCPAuth/FakeCredentials (8.82s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:735: (dbg) Run:  kubectl --context addons-221952 create -f testdata/busybox.yaml
addons_test.go:742: (dbg) Run:  kubectl --context addons-221952 create sa gcp-auth-test
addons_test.go:748: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [c3e7d501-d909-4feb-b760-f3c871b245ff] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [c3e7d501-d909-4feb-b760-f3c871b245ff] Running
addons_test.go:748: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 8.003427822s
addons_test.go:754: (dbg) Run:  kubectl --context addons-221952 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:766: (dbg) Run:  kubectl --context addons-221952 describe sa gcp-auth-test
addons_test.go:780: (dbg) Run:  kubectl --context addons-221952 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:804: (dbg) Run:  kubectl --context addons-221952 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (8.82s)

TestAddons/parallel/Registry (16.17s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:442: registry stabilized in 5.493978ms
addons_test.go:444: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-96x6t" [35b411cc-6b41-4017-8fb1-ca0b1a0d4b48] Running
addons_test.go:444: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.00399215s
addons_test.go:447: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-nvr8n" [433e8337-cfc9-46bc-81c6-e9790d4d9583] Running
addons_test.go:447: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003783021s
addons_test.go:452: (dbg) Run:  kubectl --context addons-221952 delete po -l run=registry-test --now
addons_test.go:457: (dbg) Run:  kubectl --context addons-221952 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:457: (dbg) Done: kubectl --context addons-221952 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.149740303s)
addons_test.go:471: (dbg) Run:  out/minikube-linux-arm64 -p addons-221952 ip
2025/12/09 04:12:21 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1113: (dbg) Run:  out/minikube-linux-arm64 -p addons-221952 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.17s)
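
Note: the actual registry probe is a single in-cluster DNS + HTTP check and can be replayed on its own (command verbatim from the log):

    kubectl --context addons-221952 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"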

TestAddons/parallel/RegistryCreds (0.76s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:383: registry-creds stabilized in 3.368274ms
addons_test.go:385: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-221952
addons_test.go:392: (dbg) Run:  kubectl --context addons-221952 -n kube-system get secret -o yaml
addons_test.go:1113: (dbg) Run:  out/minikube-linux-arm64 -p addons-221952 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.76s)
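
Note: registry-creds is configured non-interactively here by feeding an answers file to "addons configure"; the resulting credentials then land as secrets in kube-system (both commands verbatim from the log):

    out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-221952
    kubectl --context addons-221952 -n kube-system get secret -o yaml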

TestAddons/parallel/Ingress (17.43s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:269: (dbg) Run:  kubectl --context addons-221952 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:294: (dbg) Run:  kubectl --context addons-221952 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:307: (dbg) Run:  kubectl --context addons-221952 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:312: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [07448f51-016e-4f4f-b14b-8948119edc2d] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [07448f51-016e-4f4f-b14b-8948119edc2d] Running
addons_test.go:312: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 6.003345652s
I1209 04:13:10.810860 1144231 kapi.go:150] Service nginx in namespace default found.
addons_test.go:324: (dbg) Run:  out/minikube-linux-arm64 -p addons-221952 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:348: (dbg) Run:  kubectl --context addons-221952 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:353: (dbg) Run:  out/minikube-linux-arm64 -p addons-221952 ip
addons_test.go:359: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:1113: (dbg) Run:  out/minikube-linux-arm64 -p addons-221952 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1113: (dbg) Done: out/minikube-linux-arm64 -p addons-221952 addons disable ingress-dns --alsologtostderr -v=1: (1.767496195s)
addons_test.go:1113: (dbg) Run:  out/minikube-linux-arm64 -p addons-221952 addons disable ingress --alsologtostderr -v=1
addons_test.go:1113: (dbg) Done: out/minikube-linux-arm64 -p addons-221952 addons disable ingress --alsologtostderr -v=1: (7.833233347s)
--- PASS: TestAddons/parallel/Ingress (17.43s)
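
Note: the two ingress probes replay directly: a curl inside the node with a Host header matching the Ingress rule, and an ingress-dns lookup against the node IP (commands from the log; the IP is whatever "minikube ip" reports, 192.168.49.2 in this run):

    out/minikube-linux-arm64 -p addons-221952 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
    nslookup hello-john.test "$(out/minikube-linux-arm64 -p addons-221952 ip)"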

TestAddons/parallel/InspektorGadget (12.03s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:883: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-tbm2p" [a673b4ec-dcc6-4f52-ac1b-c09012b219b9] Running
addons_test.go:883: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003366529s
addons_test.go:1113: (dbg) Run:  out/minikube-linux-arm64 -p addons-221952 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1113: (dbg) Done: out/minikube-linux-arm64 -p addons-221952 addons disable inspektor-gadget --alsologtostderr -v=1: (6.028437631s)
--- PASS: TestAddons/parallel/InspektorGadget (12.03s)

TestAddons/parallel/MetricsServer (5.95s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:515: metrics-server stabilized in 3.77384ms
addons_test.go:517: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-27wmt" [fc607acb-7f5f-495e-b31f-cc55540ffe5c] Running
addons_test.go:517: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.012917579s
addons_test.go:523: (dbg) Run:  kubectl --context addons-221952 top pods -n kube-system
addons_test.go:1113: (dbg) Run:  out/minikube-linux-arm64 -p addons-221952 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.95s)

TestAddons/parallel/CSI (37.68s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1209 04:12:31.939501 1144231 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1209 04:12:31.942909 1144231 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1209 04:12:31.942932 1144231 kapi.go:107] duration metric: took 5.326494ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:609: csi-hostpath-driver pods stabilized in 5.33826ms
addons_test.go:612: (dbg) Run:  kubectl --context addons-221952 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-221952 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-221952 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-221952 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-221952 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-221952 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [a37794b7-da98-4b7a-b013-a523065746a6] Pending
helpers_test.go:352: "task-pv-pod" [a37794b7-da98-4b7a-b013-a523065746a6] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [a37794b7-da98-4b7a-b013-a523065746a6] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.003511173s
addons_test.go:632: (dbg) Run:  kubectl --context addons-221952 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:637: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-221952 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-221952 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:642: (dbg) Run:  kubectl --context addons-221952 delete pod task-pv-pod
addons_test.go:648: (dbg) Run:  kubectl --context addons-221952 delete pvc hpvc
addons_test.go:654: (dbg) Run:  kubectl --context addons-221952 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:659: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-221952 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-221952 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-221952 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-221952 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-221952 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-221952 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-221952 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:664: (dbg) Run:  kubectl --context addons-221952 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:669: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [98bea87d-b8ea-4cf0-811b-5748b468111f] Pending
helpers_test.go:352: "task-pv-pod-restore" [98bea87d-b8ea-4cf0-811b-5748b468111f] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [98bea87d-b8ea-4cf0-811b-5748b468111f] Running
addons_test.go:669: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.018329559s
addons_test.go:674: (dbg) Run:  kubectl --context addons-221952 delete pod task-pv-pod-restore
addons_test.go:674: (dbg) Done: kubectl --context addons-221952 delete pod task-pv-pod-restore: (1.355297462s)
addons_test.go:678: (dbg) Run:  kubectl --context addons-221952 delete pvc hpvc-restore
addons_test.go:682: (dbg) Run:  kubectl --context addons-221952 delete volumesnapshot new-snapshot-demo
addons_test.go:1113: (dbg) Run:  out/minikube-linux-arm64 -p addons-221952 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1113: (dbg) Run:  out/minikube-linux-arm64 -p addons-221952 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1113: (dbg) Done: out/minikube-linux-arm64 -p addons-221952 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.842700091s)
--- PASS: TestAddons/parallel/CSI (37.68s)
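
Note: the CSI test is a full snapshot/restore round trip: PVC -> pod -> VolumeSnapshot -> delete the originals -> PVC restored from the snapshot -> pod on the restored PVC. The manifest order, using the testdata files named in the log:

    kubectl --context addons-221952 create -f testdata/csi-hostpath-driver/pvc.yaml
    kubectl --context addons-221952 create -f testdata/csi-hostpath-driver/pv-pod.yaml
    kubectl --context addons-221952 create -f testdata/csi-hostpath-driver/snapshot.yaml
    kubectl --context addons-221952 delete pod task-pv-pod
    kubectl --context addons-221952 delete pvc hpvc
    kubectl --context addons-221952 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
    kubectl --context addons-221952 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml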

TestAddons/parallel/Headlamp (16.9s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:868: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-221952 --alsologtostderr -v=1
addons_test.go:868: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-221952 --alsologtostderr -v=1: (1.095894559s)
addons_test.go:873: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-dfcdc64b-l8w4q" [19ff172c-e8f7-4261-9653-d19aacaa1d63] Pending
helpers_test.go:352: "headlamp-dfcdc64b-l8w4q" [19ff172c-e8f7-4261-9653-d19aacaa1d63] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-dfcdc64b-l8w4q" [19ff172c-e8f7-4261-9653-d19aacaa1d63] Running
addons_test.go:873: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.00373657s
addons_test.go:1113: (dbg) Run:  out/minikube-linux-arm64 -p addons-221952 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1113: (dbg) Done: out/minikube-linux-arm64 -p addons-221952 addons disable headlamp --alsologtostderr -v=1: (5.797397594s)
--- PASS: TestAddons/parallel/Headlamp (16.90s)
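Note: the addon tests share one pattern: enable the addon, wait for its labelled pod to become Ready, then disable it. The harness polls pod state itself; a by-hand sketch can lean on kubectl wait (label and namespace taken from the log, the timeout value is illustrative):

	out/minikube-linux-arm64 addons enable headlamp -p addons-221952
	kubectl --context addons-221952 -n headlamp wait --for=condition=Ready \
	  pod -l app.kubernetes.io/name=headlamp --timeout=8m
	out/minikube-linux-arm64 -p addons-221952 addons disable headlamp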

                                                
                                    
TestAddons/parallel/CloudSpanner (5.69s)
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:900: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-5bdddb765-pq26h" [e260107a-fb06-4462-b7bb-d974688f7764] Running
addons_test.go:900: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003140779s
addons_test.go:1113: (dbg) Run:  out/minikube-linux-arm64 -p addons-221952 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.69s)

                                                
                                    
TestAddons/parallel/LocalPath (9.87s)
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:1009: (dbg) Run:  kubectl --context addons-221952 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:1015: (dbg) Run:  kubectl --context addons-221952 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:1019: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-221952 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-221952 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-221952 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-221952 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-221952 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-221952 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:1022: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [1167153d-6da6-45c0-aaa5-1c6aec03b160] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [1167153d-6da6-45c0-aaa5-1c6aec03b160] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [1167153d-6da6-45c0-aaa5-1c6aec03b160] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:1022: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003536483s
addons_test.go:1027: (dbg) Run:  kubectl --context addons-221952 get pvc test-pvc -o=json
addons_test.go:1036: (dbg) Run:  out/minikube-linux-arm64 -p addons-221952 ssh "cat /opt/local-path-provisioner/pvc-be1da719-2ea3-4ea2-b5d9-52058c9247c4_default_test-pvc/file1"
addons_test.go:1048: (dbg) Run:  kubectl --context addons-221952 delete pod test-local-path
addons_test.go:1052: (dbg) Run:  kubectl --context addons-221952 delete pvc test-pvc
addons_test.go:1113: (dbg) Run:  out/minikube-linux-arm64 -p addons-221952 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (9.87s)
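Note: the LocalPath flow provisions test-pvc through the Rancher local-path provisioner, lets a pod write file1, then reads it back from the node's disk over SSH. A by-hand sketch of the same check (the /opt/local-path-provisioner/<volume>_<namespace>_<claim> layout matches the path in the log; the bound volume name has to be looked up first):

	# Find the bound volume name, then read the file the pod wrote.
	vol=$(kubectl --context addons-221952 get pvc test-pvc -o jsonpath='{.spec.volumeName}')
	out/minikube-linux-arm64 -p addons-221952 ssh "cat /opt/local-path-provisioner/${vol}_default_test-pvc/file1"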

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.77s)
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1085: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-s7q9v" [7be5f0fc-3912-4e09-8f37-6bd4dce17f08] Running
addons_test.go:1085: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.005595965s
addons_test.go:1113: (dbg) Run:  out/minikube-linux-arm64 -p addons-221952 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.77s)

                                                
                                    
TestAddons/parallel/Yakd (11.84s)
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1107: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-cpzdh" [b7ee7498-3164-4986-bcc8-6913157aff2c] Running
addons_test.go:1107: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004127062s
addons_test.go:1113: (dbg) Run:  out/minikube-linux-arm64 -p addons-221952 addons disable yakd --alsologtostderr -v=1
addons_test.go:1113: (dbg) Done: out/minikube-linux-arm64 -p addons-221952 addons disable yakd --alsologtostderr -v=1: (5.83703514s)
--- PASS: TestAddons/parallel/Yakd (11.84s)

                                                
                                    
TestAddons/StoppedEnableDisable (12.34s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:177: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-221952
addons_test.go:177: (dbg) Done: out/minikube-linux-arm64 stop -p addons-221952: (12.066352126s)
addons_test.go:181: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-221952
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-221952
addons_test.go:190: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-221952
--- PASS: TestAddons/StoppedEnableDisable (12.34s)
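Note: the point of this test is that addon state can still be toggled after the cluster is stopped; the enable/disable calls succeed against the stored profile rather than a running apiserver. A minimal sketch of the same sequence:

	out/minikube-linux-arm64 stop -p addons-221952
	out/minikube-linux-arm64 addons enable dashboard -p addons-221952    # recorded in the profile config
	out/minikube-linux-arm64 addons disable dashboard -p addons-221952   # no running cluster required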

                                                
                                    
TestCertOptions (35.28s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-936624 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-936624 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (32.457580185s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-936624 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-936624 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-936624 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-936624" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-936624
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-936624: (2.08429328s)
--- PASS: TestCertOptions (35.28s)
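Note: the openssl step above is what verifies that the extra --apiserver-ips and --apiserver-names values landed in the apiserver certificate. A by-hand version that narrows the output to the SAN block:

	out/minikube-linux-arm64 -p cert-options-936624 ssh \
	  "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
	  | grep -A1 'Subject Alternative Name'
	# Expect 192.168.15.15 and www.google.com among the listed SANs.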

                                                
                                    
TestCertExpiration (233.39s)
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-074045 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-074045 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (41.561746844s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-074045 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-074045 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (9.183953366s)
helpers_test.go:175: Cleaning up "cert-expiration-074045" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-074045
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-074045: (2.645933035s)
--- PASS: TestCertExpiration (233.39s)
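Note: the first start issues certificates with a 3m lifetime, the test lets them age past that window, and the second start (--cert-expiration=8760h, i.e. one year) regenerates them. A sketch for checking the renewed expiry date (same cert path as in TestCertOptions):

	out/minikube-linux-arm64 -p cert-expiration-074045 ssh \
	  "openssl x509 -enddate -noout -in /var/lib/minikube/certs/apiserver.crt"
	# notAfter should now be roughly one year out.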

                                                
                                    
TestForceSystemdFlag (37.83s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-288240 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-288240 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (34.877387085s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-288240 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-288240" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-288240
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-288240: (2.566251774s)
--- PASS: TestForceSystemdFlag (37.83s)
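Note: --force-systemd switches the container runtime's cgroup driver to systemd, and the cat /etc/containerd/config.toml step is how the test confirms it. A narrower by-hand check (SystemdCgroup is the standard runc option key in containerd's CRI config):

	out/minikube-linux-arm64 -p force-systemd-flag-288240 ssh \
	  "grep SystemdCgroup /etc/containerd/config.toml"
	# Expect: SystemdCgroup = true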

                                                
                                    
TestForceSystemdEnv (43.05s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-572570 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-572570 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (39.761782078s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-572570 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-572570" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-572570
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-572570: (2.837324232s)
--- PASS: TestForceSystemdEnv (43.05s)

                                                
                                    
TestDockerEnvContainerd (46.38s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux arm64
docker_test.go:181: (dbg) Run:  out/minikube-linux-arm64 start -p dockerenv-184653 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-arm64 start -p dockerenv-184653 --driver=docker  --container-runtime=containerd: (30.651917397s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-184653"
docker_test.go:189: (dbg) Done: /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-184653": (1.051945383s)
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-UNvQJgKSNLmB/agent.1163094" SSH_AGENT_PID="1163095" DOCKER_HOST=ssh://docker@127.0.0.1:33885 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-UNvQJgKSNLmB/agent.1163094" SSH_AGENT_PID="1163095" DOCKER_HOST=ssh://docker@127.0.0.1:33885 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-UNvQJgKSNLmB/agent.1163094" SSH_AGENT_PID="1163095" DOCKER_HOST=ssh://docker@127.0.0.1:33885 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.227783178s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-UNvQJgKSNLmB/agent.1163094" SSH_AGENT_PID="1163095" DOCKER_HOST=ssh://docker@127.0.0.1:33885 docker image ls"
helpers_test.go:175: Cleaning up "dockerenv-184653" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p dockerenv-184653
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p dockerenv-184653: (2.057857502s)
--- PASS: TestDockerEnvContainerd (46.38s)
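Note: docker-env --ssh-host --ssh-add prints shell exports (DOCKER_HOST=ssh://..., plus SSH agent variables) so a host docker CLI talks to the Docker engine inside the minikube node, which is what the SSH_AUTH_SOCK/DOCKER_HOST invocations above exercise. A condensed session using the usual eval form (the image tag is illustrative):

	eval "$(out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-184653)"
	docker version                                          # now served over ssh://
	DOCKER_BUILDKIT=0 docker build -t local/dockerenv-test:latest testdata/docker-env
	docker image ls | grep dockerenv-test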

                                                
                                    
TestErrorSpam/setup (33.32s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-834113 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-834113 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-834113 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-834113 --driver=docker  --container-runtime=containerd: (33.31658015s)
--- PASS: TestErrorSpam/setup (33.32s)
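Note: the ErrorSpam group runs each subcommand several times against a dedicated --log_dir and fails if minikube emits unexpected warning/error spam; the "Cleaning up N logfile(s)" lines show the directory being reset between subtests. A rough by-hand version of one probe (the grep pattern is illustrative, not the test's actual filter):

	out/minikube-linux-arm64 -p nospam-834113 --log_dir /tmp/nospam-834113 status
	grep -ri "error" /tmp/nospam-834113 && echo "unexpected spam" || echo "clean"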

                                                
                                    
TestErrorSpam/start (0.77s)
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-834113 --log_dir /tmp/nospam-834113 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-834113 --log_dir /tmp/nospam-834113 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-834113 --log_dir /tmp/nospam-834113 start --dry-run
--- PASS: TestErrorSpam/start (0.77s)

                                                
                                    
TestErrorSpam/status (1.62s)
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-834113 --log_dir /tmp/nospam-834113 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-834113 --log_dir /tmp/nospam-834113 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-834113 --log_dir /tmp/nospam-834113 status
--- PASS: TestErrorSpam/status (1.62s)

                                                
                                    
TestErrorSpam/pause (1.85s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-834113 --log_dir /tmp/nospam-834113 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-834113 --log_dir /tmp/nospam-834113 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-834113 --log_dir /tmp/nospam-834113 pause
--- PASS: TestErrorSpam/pause (1.85s)

                                                
                                    
TestErrorSpam/unpause (1.69s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-834113 --log_dir /tmp/nospam-834113 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-834113 --log_dir /tmp/nospam-834113 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-834113 --log_dir /tmp/nospam-834113 unpause
--- PASS: TestErrorSpam/unpause (1.69s)

                                                
                                    
TestErrorSpam/stop (1.59s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-834113 --log_dir /tmp/nospam-834113 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-834113 --log_dir /tmp/nospam-834113 stop: (1.38424335s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-834113 --log_dir /tmp/nospam-834113 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-834113 --log_dir /tmp/nospam-834113 stop
--- PASS: TestErrorSpam/stop (1.59s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0.01s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/test/nested/copy/1144231/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.01s)

                                                
                                    
TestFunctional/serial/StartWithProxy (78.8s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-717497 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
E1209 04:16:06.742852 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/addons-221952/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 04:16:06.749358 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/addons-221952/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 04:16:06.760845 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/addons-221952/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 04:16:06.782317 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/addons-221952/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 04:16:06.823776 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/addons-221952/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 04:16:06.905185 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/addons-221952/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 04:16:07.066658 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/addons-221952/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 04:16:07.388298 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/addons-221952/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 04:16:08.030327 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/addons-221952/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 04:16:09.312044 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/addons-221952/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 04:16:11.874398 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/addons-221952/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 04:16:16.996581 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/addons-221952/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 04:16:27.238057 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/addons-221952/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-arm64 start -p functional-717497 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (1m18.802356537s)
--- PASS: TestFunctional/serial/StartWithProxy (78.80s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (6.87s)
=== RUN   TestFunctional/serial/SoftStart
I1209 04:16:30.519397 1144231 config.go:182] Loaded profile config "functional-717497": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-717497 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-717497 --alsologtostderr -v=8: (6.863796839s)
functional_test.go:678: soft start took 6.865945522s for "functional-717497" cluster.
I1209 04:16:37.383521 1144231 config.go:182] Loaded profile config "functional-717497": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/SoftStart (6.87s)

                                                
                                    
TestFunctional/serial/KubeContext (0.06s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.11s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-717497 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.11s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.68s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-717497 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-717497 cache add registry.k8s.io/pause:3.1: (1.421896122s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-717497 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-717497 cache add registry.k8s.io/pause:3.3: (1.208595577s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-717497 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-717497 cache add registry.k8s.io/pause:latest: (1.04987808s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.68s)
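Note: cache add pulls an image into minikube's host-side cache and loads it into the node's runtime, which is why the later verify_cache_inside_node step can list it with crictl. Sketch of the same round trip:

	out/minikube-linux-arm64 -p functional-717497 cache add registry.k8s.io/pause:3.1
	out/minikube-linux-arm64 cache list                                   # host-side cache contents
	out/minikube-linux-arm64 -p functional-717497 ssh sudo crictl images | grep pause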

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.34s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-717497 /tmp/TestFunctionalserialCacheCmdcacheadd_local2188597587/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-717497 cache add minikube-local-cache-test:functional-717497
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-717497 cache delete minikube-local-cache-test:functional-717497
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-717497
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.34s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.05s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-717497 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.92s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-717497 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-717497 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-717497 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (317.866381ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-717497 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-717497 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.92s)
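Note: the reload sequence above removes the image from the node's runtime (so crictl inspecti exits 1 with "no such image"), then cache reload pushes it back from the host-side cache and the final inspecti succeeds. The same steps by hand:

	out/minikube-linux-arm64 -p functional-717497 ssh sudo crictl rmi registry.k8s.io/pause:latest
	out/minikube-linux-arm64 -p functional-717497 ssh sudo crictl inspecti registry.k8s.io/pause:latest  # exit 1: image gone
	out/minikube-linux-arm64 -p functional-717497 cache reload
	out/minikube-linux-arm64 -p functional-717497 ssh sudo crictl inspecti registry.k8s.io/pause:latest  # succeeds again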

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.17s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.17s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.2s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-717497 kubectl -- --context functional-717497 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.20s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.15s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-717497 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.15s)

                                                
                                    
TestFunctional/serial/ExtraConfig (43.97s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-717497 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1209 04:16:47.719466 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/addons-221952/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 04:17:28.682119 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/addons-221952/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-717497 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (43.973643145s)
functional_test.go:776: restart took 43.973735851s for "functional-717497" cluster.
I1209 04:17:29.396532 1144231 config.go:182] Loaded profile config "functional-717497": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/ExtraConfig (43.97s)
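Note: --extra-config=component.flag=value forwards a flag to the named control-plane component; here it adds NamespaceAutoProvision to the apiserver's admission plugins, and the restart applies it. One way to confirm the flag reached the apiserver process (the process grep is illustrative, not the test's own check):

	out/minikube-linux-arm64 start -p functional-717497 \
	  --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
	out/minikube-linux-arm64 -p functional-717497 ssh \
	  "ps -ef | grep kube-apiserver | grep -o 'enable-admission-plugins=[^ ]*'"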

                                                
                                    
TestFunctional/serial/ComponentHealth (0.09s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-717497 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.09s)
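Note: the health check lists pods labelled tier=control-plane in kube-system and asserts each is Running and Ready, which is what the phase/status lines above report. A compact jsonpath version of the same query:

	kubectl --context functional-717497 -n kube-system get po -l tier=control-plane \
	  -o jsonpath='{range .items[*]}{.metadata.name}{" "}{.status.phase}{"\n"}{end}'
	# Every line should end in "Running".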

                                                
                                    
TestFunctional/serial/LogsCmd (1.49s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-717497 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-arm64 -p functional-717497 logs: (1.492131907s)
--- PASS: TestFunctional/serial/LogsCmd (1.49s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.47s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-717497 logs --file /tmp/TestFunctionalserialLogsFileCmd490679318/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-arm64 -p functional-717497 logs --file /tmp/TestFunctionalserialLogsFileCmd490679318/001/logs.txt: (1.464678007s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.47s)

                                                
                                    
TestFunctional/serial/InvalidService (4.7s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-717497 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-717497
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-717497: exit status 115 (388.118479ms)

-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:31066 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-717497 delete -f testdata/invalidsvc.yaml
functional_test.go:2332: (dbg) Done: kubectl --context functional-717497 delete -f testdata/invalidsvc.yaml: (1.053902671s)
--- PASS: TestFunctional/serial/InvalidService (4.70s)
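Note: exit status 115 is minikube's SVC_UNREACHABLE code: the Service object exists and received a NodePort (31066 above), but no pod backs it, so there is nothing to route to. A quick way to see that state from kubectl (standard commands, shown for illustration):

	kubectl --context functional-717497 get svc invalid-svc
	kubectl --context functional-717497 get endpoints invalid-svc   # empty ENDPOINTS column => no backing pod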

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.5s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-717497 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-717497 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-717497 config get cpus: exit status 14 (106.86729ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-717497 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-717497 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-717497 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-717497 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-717497 config get cpus: exit status 14 (73.075626ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.50s)
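Note: minikube config stores per-user defaults, and config get on an unset key fails with exit code 14 ("specified key could not be found in config"), which is exactly what the test asserts before and after the set/unset cycle. The same round trip by hand:

	out/minikube-linux-arm64 -p functional-717497 config set cpus 2
	out/minikube-linux-arm64 -p functional-717497 config get cpus            # prints 2
	out/minikube-linux-arm64 -p functional-717497 config unset cpus
	out/minikube-linux-arm64 -p functional-717497 config get cpus; echo $?   # 14: key not found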

                                                
                                    
TestFunctional/parallel/DashboardCmd (9.94s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-717497 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-717497 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 1178295: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (9.94s)

                                                
                                    
TestFunctional/parallel/DryRun (0.4s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-717497 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-717497 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (177.674626ms)

-- stdout --
	* [functional-717497] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22081
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22081-1142328/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-1142328/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1209 04:18:07.384832 1177875 out.go:360] Setting OutFile to fd 1 ...
	I1209 04:18:07.385048 1177875 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 04:18:07.385079 1177875 out.go:374] Setting ErrFile to fd 2...
	I1209 04:18:07.385099 1177875 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 04:18:07.385403 1177875 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-1142328/.minikube/bin
	I1209 04:18:07.385798 1177875 out.go:368] Setting JSON to false
	I1209 04:18:07.386774 1177875 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":25211,"bootTime":1765228677,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1209 04:18:07.386869 1177875 start.go:143] virtualization:  
	I1209 04:18:07.389883 1177875 out.go:179] * [functional-717497] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1209 04:18:07.392806 1177875 out.go:179]   - MINIKUBE_LOCATION=22081
	I1209 04:18:07.392937 1177875 notify.go:221] Checking for updates...
	I1209 04:18:07.398485 1177875 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 04:18:07.401373 1177875 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22081-1142328/kubeconfig
	I1209 04:18:07.404191 1177875 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-1142328/.minikube
	I1209 04:18:07.407086 1177875 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1209 04:18:07.410051 1177875 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 04:18:07.413385 1177875 config.go:182] Loaded profile config "functional-717497": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1209 04:18:07.414034 1177875 driver.go:422] Setting default libvirt URI to qemu:///system
	I1209 04:18:07.433630 1177875 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1209 04:18:07.433746 1177875 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 04:18:07.496541 1177875 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-09 04:18:07.487224279 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1209 04:18:07.496650 1177875 docker.go:319] overlay module found
	I1209 04:18:07.499732 1177875 out.go:179] * Using the docker driver based on existing profile
	I1209 04:18:07.502763 1177875 start.go:309] selected driver: docker
	I1209 04:18:07.502787 1177875 start.go:927] validating driver "docker" against &{Name:functional-717497 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-717497 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 04:18:07.502887 1177875 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 04:18:07.506556 1177875 out.go:203] 
	W1209 04:18:07.509450 1177875 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1209 04:18:07.512331 1177875 out.go:203] 

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-717497 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.40s)
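Note: --dry-run validates flags against the existing profile without touching the cluster; asking for 250MB trips the usable minimum of 1800MB and exits with code 23 (RSRC_INSUFFICIENT_REQ_MEMORY), while the second, flag-free dry run succeeds. Reproduced by hand:

	out/minikube-linux-arm64 start -p functional-717497 --dry-run --memory 250MB \
	  --driver=docker --container-runtime=containerd; echo $?   # expect 23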

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.21s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-717497 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-717497 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (207.412932ms)

-- stdout --
	* [functional-717497] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22081
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22081-1142328/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-1142328/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1209 04:18:07.187650 1177829 out.go:360] Setting OutFile to fd 1 ...
	I1209 04:18:07.187824 1177829 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 04:18:07.187837 1177829 out.go:374] Setting ErrFile to fd 2...
	I1209 04:18:07.187842 1177829 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 04:18:07.188927 1177829 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-1142328/.minikube/bin
	I1209 04:18:07.189306 1177829 out.go:368] Setting JSON to false
	I1209 04:18:07.190292 1177829 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":25211,"bootTime":1765228677,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1209 04:18:07.190364 1177829 start.go:143] virtualization:  
	I1209 04:18:07.193454 1177829 out.go:179] * [functional-717497] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1209 04:18:07.197194 1177829 out.go:179]   - MINIKUBE_LOCATION=22081
	I1209 04:18:07.197361 1177829 notify.go:221] Checking for updates...
	I1209 04:18:07.202713 1177829 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 04:18:07.205471 1177829 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22081-1142328/kubeconfig
	I1209 04:18:07.208301 1177829 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-1142328/.minikube
	I1209 04:18:07.211145 1177829 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1209 04:18:07.214009 1177829 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 04:18:07.217367 1177829 config.go:182] Loaded profile config "functional-717497": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1209 04:18:07.218102 1177829 driver.go:422] Setting default libvirt URI to qemu:///system
	I1209 04:18:07.256533 1177829 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1209 04:18:07.256643 1177829 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 04:18:07.319693 1177829 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-09 04:18:07.310196755 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1209 04:18:07.319799 1177829 docker.go:319] overlay module found
	I1209 04:18:07.322773 1177829 out.go:179] * Using the docker driver based on existing profile
	I1209 04:18:07.325533 1177829 start.go:309] selected driver: docker
	I1209 04:18:07.325554 1177829 start.go:927] validating driver "docker" against &{Name:functional-717497 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-717497 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 04:18:07.325666 1177829 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 04:18:07.329087 1177829 out.go:203] 
	W1209 04:18:07.331927 1177829 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1209 04:18:07.334710 1177829 out.go:203] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.21s)
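The failure above is the point of this test: start is invoked with a deliberately undersized memory request so the localized error path is exercised. A minimal repro sketch, assuming minikube picks the locale up from LC_ALL (the profile name and memory value come from the log; the exact environment the test sets is not shown in this report):

# hypothetical repro; expected to exit with RSRC_INSUFFICIENT_REQ_MEMORY
LC_ALL=fr out/minikube-linux-arm64 start -p functional-717497 --memory=250MB --driver=docker --container-runtime=containerd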

TestFunctional/parallel/StatusCmd (1.17s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-717497 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-717497 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-717497 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.17s)
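For reference, a sketch of the three status variants the test drives; the template string is copied verbatim from the log, including the test's own "kublet" spelling, and the jq step is an illustrative assumption rather than part of the test:

out/minikube-linux-arm64 -p functional-717497 status
out/minikube-linux-arm64 -p functional-717497 status -f 'host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
out/minikube-linux-arm64 -p functional-717497 status -o json | jq -r '.Host'   # assumes jq is installed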

TestFunctional/parallel/ServiceCmdConnect (8.6s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-717497 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-717497 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-bcq4k" [ce6fa3c7-9ec7-4f15-b644-a194acb50caf] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-connect-7d85dfc575-bcq4k" [ce6fa3c7-9ec7-4f15-b644-a194acb50caf] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.004118036s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-arm64 -p functional-717497 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.49.2:31155
functional_test.go:1680: http://192.168.49.2:31155: success! body:
Request served by hello-node-connect-7d85dfc575-bcq4k

HTTP/1.1 GET /

Host: 192.168.49.2:31155
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.60s)
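Condensed, the NodePort flow this test validates (deployment name, image, and port are taken from the log above):

kubectl --context functional-717497 create deployment hello-node-connect --image kicbase/echo-server
kubectl --context functional-717497 expose deployment hello-node-connect --type=NodePort --port=8080
URL=$(out/minikube-linux-arm64 -p functional-717497 service hello-node-connect --url)
curl -s "$URL"   # echo-server replies with the serving pod name and the request headers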

TestFunctional/parallel/AddonsCmd (0.14s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-717497 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-717497 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

TestFunctional/parallel/PersistentVolumeClaim (21.02s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [72585e90-dfba-4a33-97af-601e09e9dd0b] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003773277s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-717497 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-717497 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-717497 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-717497 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [721325ac-485a-4a28-83c7-738d56420562] Pending
helpers_test.go:352: "sp-pod" [721325ac-485a-4a28-83c7-738d56420562] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [721325ac-485a-4a28-83c7-738d56420562] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003568906s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-717497 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-717497 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-717497 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [3d340402-0a46-44cd-a30b-91cf7f8efee7] Pending
helpers_test.go:352: "sp-pod" [3d340402-0a46-44cd-a30b-91cf7f8efee7] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [3d340402-0a46-44cd-a30b-91cf7f8efee7] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003863024s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-717497 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (21.02s)
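The persistence check boils down to: claim a volume, write through one pod, recreate the pod, and read the file back. A minimal sketch; the PVC spec below is an assumption standing in for testdata/storage-provisioner/pvc.yaml, whose contents this report does not include (claim name, pod name, and mount path are from the log):

kubectl --context functional-717497 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 500Mi   # assumed size, not shown in the report
EOF
kubectl --context functional-717497 exec sp-pod -- touch /tmp/mount/foo   # write via the first pod
# delete and re-apply the pod manifest, then confirm the file survived:
kubectl --context functional-717497 exec sp-pod -- ls /tmp/mount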

TestFunctional/parallel/SSHCmd (0.73s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-717497 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-717497 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.73s)

TestFunctional/parallel/CpCmd (2.36s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-717497 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-717497 ssh -n functional-717497 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-717497 cp functional-717497:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2582294113/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-717497 ssh -n functional-717497 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-717497 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-717497 ssh -n functional-717497 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.36s)
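The copy directions exercised above, in one sketch (node paths from the log; /tmp/dest is a placeholder for any host directory):

out/minikube-linux-arm64 -p functional-717497 cp testdata/cp-test.txt /home/docker/cp-test.txt                      # host -> node
out/minikube-linux-arm64 -p functional-717497 cp functional-717497:/home/docker/cp-test.txt /tmp/dest/cp-test.txt   # node -> host
out/minikube-linux-arm64 -p functional-717497 ssh -n functional-717497 "sudo cat /home/docker/cp-test.txt"          # verify contents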

TestFunctional/parallel/FileSync (0.39s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/1144231/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-717497 ssh "sudo cat /etc/test/nested/copy/1144231/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.39s)

TestFunctional/parallel/CertSync (2.2s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/1144231.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-717497 ssh "sudo cat /etc/ssl/certs/1144231.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/1144231.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-717497 ssh "sudo cat /usr/share/ca-certificates/1144231.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-717497 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/11442312.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-717497 ssh "sudo cat /etc/ssl/certs/11442312.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/11442312.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-717497 ssh "sudo cat /usr/share/ca-certificates/11442312.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-717497 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.20s)
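The *.0 files checked above follow the OpenSSL hashed-name convention for CA directories. A sketch of verifying the pairing by hand inside the node, assuming openssl is available in the node image:

out/minikube-linux-arm64 -p functional-717497 ssh "openssl x509 -noout -hash -in /etc/ssl/certs/1144231.pem"
# the printed hash should match the basename of the corresponding .0 file (here 51391683)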

TestFunctional/parallel/NodeLabels (0.1s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-717497 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.10s)
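Equivalent checks by hand; the first line is the test's go-template from the log, the second is an illustrative simpler alternative:

kubectl --context functional-717497 get nodes --output=go-template --template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
kubectl --context functional-717497 get nodes --show-labels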

TestFunctional/parallel/NonActiveRuntimeDisabled (0.57s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-717497 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-717497 ssh "sudo systemctl is-active docker": exit status 1 (293.60456ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-717497 ssh "sudo systemctl is-active crio"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-717497 ssh "sudo systemctl is-active crio": exit status 1 (276.541194ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.57s)
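The exit codes above are systemd convention: systemctl is-active exits 0 for an active unit and non-zero (3 here) for an inactive one, so the test only needs to assert on the exit status. Sketch:

out/minikube-linux-arm64 -p functional-717497 ssh "sudo systemctl is-active containerd"   # active, exit 0 on this containerd runtime
out/minikube-linux-arm64 -p functional-717497 ssh "sudo systemctl is-active docker"       # inactive, exit 3, as logged above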

TestFunctional/parallel/License (0.26s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.26s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.62s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-717497 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-717497 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-717497 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 1175463: os: process already finished
helpers_test.go:519: unable to terminate pid 1175272: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-717497 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.62s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-717497 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.47s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-717497 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [c1d15388-cd89-4740-9ac9-c93665033271] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [c1d15388-cd89-4740-9ac9-c93665033271] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.003593467s
I1209 04:17:48.457063 1144231 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.47s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-717497 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.98.178.96 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
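Put together, the tunnel flow this serial group exercises looks like the sketch below (service name and IP are from the log; backgrounding the tunnel with & is an assumption for illustration, since the test manages it as a daemon process):

out/minikube-linux-arm64 -p functional-717497 tunnel --alsologtostderr &
kubectl --context functional-717497 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
curl -s http://10.98.178.96/   # reachable once the tunnel assigns the ingress IP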

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-717497 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ServiceCmd/DeployApp (6.22s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-717497 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-717497 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-sdgm9" [ed8fe0f6-ffde-4807-bae0-f422d4ff5cd2] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-75c85bcc94-sdgm9" [ed8fe0f6-ffde-4807-bae0-f422d4ff5cd2] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 6.003754009s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (6.22s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "363.443569ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "53.185058ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.44s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "362.738106ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "74.285827ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.44s)
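Illustrative only, assuming jq is available and that profile list -o json groups results under valid/invalid keys (the test itself only times the command):

out/minikube-linux-arm64 profile list -o json | jq -r '.valid[].Name'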

TestFunctional/parallel/MountCmd/any-port (8.61s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-717497 /tmp/TestFunctionalparallelMountCmdany-port903652663/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1765253881814241326" to /tmp/TestFunctionalparallelMountCmdany-port903652663/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1765253881814241326" to /tmp/TestFunctionalparallelMountCmdany-port903652663/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1765253881814241326" to /tmp/TestFunctionalparallelMountCmdany-port903652663/001/test-1765253881814241326
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-717497 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-717497 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (376.748357ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1209 04:18:02.192654 1144231 retry.go:31] will retry after 709.791857ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-717497 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-717497 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec  9 04:18 created-by-test
-rw-r--r-- 1 docker docker 24 Dec  9 04:18 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec  9 04:18 test-1765253881814241326
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-717497 ssh cat /mount-9p/test-1765253881814241326
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-717497 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [d929557b-2c24-47b5-9015-67cd1ce9b393] Pending
helpers_test.go:352: "busybox-mount" [d929557b-2c24-47b5-9015-67cd1ce9b393] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [d929557b-2c24-47b5-9015-67cd1ce9b393] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [d929557b-2c24-47b5-9015-67cd1ce9b393] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.016935983s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-717497 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-717497 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-717497 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-717497 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-717497 /tmp/TestFunctionalparallelMountCmdany-port903652663/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.61s)
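Reduced to its essentials, the 9p round-trip above is the sketch below (mount point from the log; $SRC is a placeholder for the shared host directory, and backgrounding with & stands in for the test's daemon handling):

out/minikube-linux-arm64 mount -p functional-717497 "$SRC:/mount-9p" &
out/minikube-linux-arm64 -p functional-717497 ssh "findmnt -T /mount-9p | grep 9p"   # confirm the 9p mount is visible in the node
out/minikube-linux-arm64 -p functional-717497 ssh -- ls -la /mount-9p               # host files appear under the mount point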

TestFunctional/parallel/ServiceCmd/List (0.59s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-717497 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.59s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.58s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-717497 service list -o json
functional_test.go:1504: Took "577.449698ms" to run "out/minikube-linux-arm64 -p functional-717497 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.58s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.4s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-717497 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.49.2:31223
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.40s)

TestFunctional/parallel/ServiceCmd/Format (0.4s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-717497 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.40s)

TestFunctional/parallel/ServiceCmd/URL (0.39s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-717497 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:31223
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.39s)

TestFunctional/parallel/MountCmd/specific-port (2.36s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-717497 /tmp/TestFunctionalparallelMountCmdspecific-port1119829249/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-717497 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-717497 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (570.369573ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1209 04:18:10.993206 1144231 retry.go:31] will retry after 537.376444ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-717497 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-717497 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-717497 /tmp/TestFunctionalparallelMountCmdspecific-port1119829249/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-717497 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-717497 ssh "sudo umount -f /mount-9p": exit status 1 (343.57257ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-717497 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-717497 /tmp/TestFunctionalparallelMountCmdspecific-port1119829249/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.36s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.05s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-717497 /tmp/TestFunctionalparallelMountCmdVerifyCleanup617949852/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-717497 /tmp/TestFunctionalparallelMountCmdVerifyCleanup617949852/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-717497 /tmp/TestFunctionalparallelMountCmdVerifyCleanup617949852/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-717497 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Done: out/minikube-linux-arm64 -p functional-717497 ssh "findmnt -T" /mount1: (1.039485872s)
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-717497 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-717497 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-717497 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-717497 /tmp/TestFunctionalparallelMountCmdVerifyCleanup617949852/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-717497 /tmp/TestFunctionalparallelMountCmdVerifyCleanup617949852/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-717497 /tmp/TestFunctionalparallelMountCmdVerifyCleanup617949852/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.05s)
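Cleanup relies on the --kill flag shown above, which tears down every mount process for the profile in one shot; a follow-up findmnt should then fail:

out/minikube-linux-arm64 mount -p functional-717497 --kill=true
out/minikube-linux-arm64 -p functional-717497 ssh "findmnt -T /mount1"   # expected to exit non-zero once unmounted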

TestFunctional/parallel/Version/short (0.06s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-717497 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

TestFunctional/parallel/Version/components (1.4s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-717497 version -o=json --components
functional_test.go:2275: (dbg) Done: out/minikube-linux-arm64 -p functional-717497 version -o=json --components: (1.399689174s)
--- PASS: TestFunctional/parallel/Version/components (1.40s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-717497 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-717497 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.2
registry.k8s.io/kube-proxy:v1.34.2
registry.k8s.io/kube-controller-manager:v1.34.2
registry.k8s.io/kube-apiserver:v1.34.2
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.12.1
public.ecr.aws/nginx/nginx:alpine
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/minikube-local-cache-test:functional-717497
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:functional-717497
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-717497 image ls --format short --alsologtostderr:
I1209 04:18:22.375234 1180865 out.go:360] Setting OutFile to fd 1 ...
I1209 04:18:22.377445 1180865 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1209 04:18:22.377492 1180865 out.go:374] Setting ErrFile to fd 2...
I1209 04:18:22.377514 1180865 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1209 04:18:22.377801 1180865 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-1142328/.minikube/bin
I1209 04:18:22.379071 1180865 config.go:182] Loaded profile config "functional-717497": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1209 04:18:22.379250 1180865 config.go:182] Loaded profile config "functional-717497": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1209 04:18:22.379866 1180865 cli_runner.go:164] Run: docker container inspect functional-717497 --format={{.State.Status}}
I1209 04:18:22.414224 1180865 ssh_runner.go:195] Run: systemctl --version
I1209 04:18:22.414307 1180865 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-717497
I1209 04:18:22.441878 1180865 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33895 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/functional-717497/id_rsa Username:docker}
I1209 04:18:22.553873 1180865 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.29s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-717497 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-717497 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                    IMAGE                    │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ gcr.io/k8s-minikube/busybox                 │ 1.28.4-glibc       │ sha256:1611cd │ 1.94MB │
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                 │ sha256:ba04bb │ 8.03MB │
│ registry.k8s.io/kube-controller-manager     │ v1.34.2            │ sha256:1b3491 │ 20.7MB │
│ registry.k8s.io/pause                       │ latest             │ sha256:8cb209 │ 71.3kB │
│ registry.k8s.io/kube-apiserver              │ v1.34.2            │ sha256:b178af │ 24.6MB │
│ registry.k8s.io/kube-proxy                  │ v1.34.2            │ sha256:94bff1 │ 22.8MB │
│ registry.k8s.io/kube-scheduler              │ v1.34.2            │ sha256:4f982e │ 15.8MB │
│ docker.io/kicbase/echo-server               │ functional-717497  │ sha256:ce2d2c │ 2.17MB │
│ public.ecr.aws/nginx/nginx                  │ alpine             │ sha256:cbad63 │ 23.1MB │
│ registry.k8s.io/coredns/coredns             │ v1.12.1            │ sha256:138784 │ 20.4MB │
│ registry.k8s.io/pause                       │ 3.1                │ sha256:8057e0 │ 262kB  │
│ registry.k8s.io/pause                       │ 3.10.1             │ sha256:d7b100 │ 268kB  │
│ registry.k8s.io/pause                       │ 3.3                │ sha256:3d1873 │ 249kB  │
│ docker.io/library/minikube-local-cache-test │ functional-717497  │ sha256:f396cc │ 992B   │
│ registry.k8s.io/etcd                        │ 3.6.5-0            │ sha256:2c5f0d │ 21.1MB │
│ docker.io/kindest/kindnetd                  │ v20250512-df8de77b │ sha256:b1a8c6 │ 40.6MB │
└─────────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-717497 image ls --format table --alsologtostderr:
I1209 04:18:23.068137 1181091 out.go:360] Setting OutFile to fd 1 ...
I1209 04:18:23.068308 1181091 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1209 04:18:23.068320 1181091 out.go:374] Setting ErrFile to fd 2...
I1209 04:18:23.068343 1181091 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1209 04:18:23.068683 1181091 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-1142328/.minikube/bin
I1209 04:18:23.069352 1181091 config.go:182] Loaded profile config "functional-717497": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1209 04:18:23.069480 1181091 config.go:182] Loaded profile config "functional-717497": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1209 04:18:23.070078 1181091 cli_runner.go:164] Run: docker container inspect functional-717497 --format={{.State.Status}}
I1209 04:18:23.086978 1181091 ssh_runner.go:195] Run: systemctl --version
I1209 04:18:23.087040 1181091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-717497
I1209 04:18:23.108501 1181091 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33895 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/functional-717497/id_rsa Username:docker}
I1209 04:18:23.220923 1181091 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-717497 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-717497 image ls --format json --alsologtostderr:
[{"id":"sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"74084559"},{"id":"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-717497"],"size":"2173567"},{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"},{"id":"sha256:b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7","repoDigests":["registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.2"],"size":"24559643"},{"id":"sha256:1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.2"],"size":"20718696"},{"id":"sha256:cbad6347cca28a6ee7b08793856bc6fcb2c2c7a377a62a5e6d785895c4194ac1","repoDigests":["public.ecr.aws/nginx/nginx@sha256:b7198452993fe37c15651e967713dd500eb4367f80a2d63c3bb5b172e46fc3b5"],"repoTags":["public.ecr.aws/nginx/nginx:alpine"],"size":"23107444"},{"id":"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"20392204"},{"id":"sha256:94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786","repoDigests":["registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.2"],"size":"22802260"},{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"},{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"},{"id":"sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"40636774"},{"id":"sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"18306114"},{"id":"sha256:f396cc1d2a2f792c8359c58d4cd23fe6d949d3fd4d68a61961f5310e98abe14b","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-717497"],"size":"992"},{"id":"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"1935750"},{"id":"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42","repoDigests":["registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"21136588"},{"id":"sha256:4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949","repoDigests":["registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.2"],"size":"15775785"},{"id":"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"267939"},{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-717497 image ls --format json --alsologtostderr:
I1209 04:18:22.795647 1181009 out.go:360] Setting OutFile to fd 1 ...
I1209 04:18:22.795890 1181009 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1209 04:18:22.795917 1181009 out.go:374] Setting ErrFile to fd 2...
I1209 04:18:22.795937 1181009 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1209 04:18:22.796249 1181009 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-1142328/.minikube/bin
I1209 04:18:22.797036 1181009 config.go:182] Loaded profile config "functional-717497": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1209 04:18:22.797356 1181009 config.go:182] Loaded profile config "functional-717497": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1209 04:18:22.798093 1181009 cli_runner.go:164] Run: docker container inspect functional-717497 --format={{.State.Status}}
I1209 04:18:22.819994 1181009 ssh_runner.go:195] Run: systemctl --version
I1209 04:18:22.820398 1181009 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-717497
I1209 04:18:22.852433 1181009 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33895 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/functional-717497/id_rsa Username:docker}
I1209 04:18:22.958500 1181009 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)
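Illustrative only, assuming jq: the same JSON listing reduced to tagged images (the array-of-objects shape with repoTags fields is visible in the output above; the test itself only compares raw output):

out/minikube-linux-arm64 -p functional-717497 image ls --format json | jq -r '.[].repoTags[]'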

TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-717497 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-717497 image ls --format yaml --alsologtostderr:
- id: sha256:cbad6347cca28a6ee7b08793856bc6fcb2c2c7a377a62a5e6d785895c4194ac1
repoDigests:
- public.ecr.aws/nginx/nginx@sha256:b7198452993fe37c15651e967713dd500eb4367f80a2d63c3bb5b172e46fc3b5
repoTags:
- public.ecr.aws/nginx/nginx:alpine
size: "23107444"
- id: sha256:1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.2
size: "20718696"
- id: sha256:4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.2
size: "15775785"
- id: sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-717497
size: "2173567"
- id: sha256:94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786
repoDigests:
- registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5
repoTags:
- registry.k8s.io/kube-proxy:v1.34.2
size: "22802260"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
- id: sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
repoTags:
- registry.k8s.io/pause:3.10.1
size: "267939"
- id: sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "74084559"
- id: sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "40636774"
- id: sha256:f396cc1d2a2f792c8359c58d4cd23fe6d949d3fd4d68a61961f5310e98abe14b
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-717497
size: "992"
- id: sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "1935750"
- id: sha256:b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.2
size: "24559643"
- id: sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "18306114"
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "20392204"
- id: sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "21136588"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"

functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-717497 image ls --format yaml --alsologtostderr:
I1209 04:18:22.500366 1180922 out.go:360] Setting OutFile to fd 1 ...
I1209 04:18:22.500738 1180922 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1209 04:18:22.500754 1180922 out.go:374] Setting ErrFile to fd 2...
I1209 04:18:22.500760 1180922 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1209 04:18:22.501029 1180922 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-1142328/.minikube/bin
I1209 04:18:22.501671 1180922 config.go:182] Loaded profile config "functional-717497": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1209 04:18:22.501800 1180922 config.go:182] Loaded profile config "functional-717497": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1209 04:18:22.502306 1180922 cli_runner.go:164] Run: docker container inspect functional-717497 --format={{.State.Status}}
I1209 04:18:22.521277 1180922 ssh_runner.go:195] Run: systemctl --version
I1209 04:18:22.521360 1180922 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-717497
I1209 04:18:22.551929 1180922 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33895 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/functional-717497/id_rsa Username:docker}
I1209 04:18:22.664416 1180922 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.86s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-717497 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-717497 ssh pgrep buildkitd: exit status 1 (319.295462ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-717497 image build -t localhost/my-image:functional-717497 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-717497 image build -t localhost/my-image:functional-717497 testdata/build --alsologtostderr: (3.302895718s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-717497 image build -t localhost/my-image:functional-717497 testdata/build --alsologtostderr:
I1209 04:18:22.963963 1181068 out.go:360] Setting OutFile to fd 1 ...
I1209 04:18:22.967775 1181068 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1209 04:18:22.967829 1181068 out.go:374] Setting ErrFile to fd 2...
I1209 04:18:22.967851 1181068 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1209 04:18:22.968268 1181068 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-1142328/.minikube/bin
I1209 04:18:22.969243 1181068 config.go:182] Loaded profile config "functional-717497": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1209 04:18:22.976769 1181068 config.go:182] Loaded profile config "functional-717497": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1209 04:18:22.977429 1181068 cli_runner.go:164] Run: docker container inspect functional-717497 --format={{.State.Status}}
I1209 04:18:23.006147 1181068 ssh_runner.go:195] Run: systemctl --version
I1209 04:18:23.006229 1181068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-717497
I1209 04:18:23.037650 1181068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33895 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/functional-717497/id_rsa Username:docker}
I1209 04:18:23.146895 1181068 build_images.go:162] Building image from path: /tmp/build.2637226983.tar
I1209 04:18:23.146979 1181068 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1209 04:18:23.157732 1181068 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2637226983.tar
I1209 04:18:23.163466 1181068 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2637226983.tar: stat -c "%s %y" /var/lib/minikube/build/build.2637226983.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2637226983.tar': No such file or directory
I1209 04:18:23.163505 1181068 ssh_runner.go:362] scp /tmp/build.2637226983.tar --> /var/lib/minikube/build/build.2637226983.tar (3072 bytes)
I1209 04:18:23.184082 1181068 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2637226983
I1209 04:18:23.192563 1181068 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2637226983 -xf /var/lib/minikube/build/build.2637226983.tar
I1209 04:18:23.201395 1181068 containerd.go:394] Building image: /var/lib/minikube/build/build.2637226983
I1209 04:18:23.201479 1181068 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.2637226983 --local dockerfile=/var/lib/minikube/build/build.2637226983 --output type=image,name=localhost/my-image:functional-717497
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.3s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 DONE 0.1s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.2s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.4s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.5s

#6 [2/3] RUN true
#6 DONE 0.6s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:71cfbe6db64c1099cedff6ac49683f4a397d259fedd325af219a5b1063fb7786 0.0s done
#8 exporting config sha256:3234d7197f5ca39cc635850e8cdc7d46707c20ad297875be7173c07a1307e379 0.0s done
#8 naming to localhost/my-image:functional-717497 done
#8 DONE 0.2s
I1209 04:18:26.192545 1181068 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.2637226983 --local dockerfile=/var/lib/minikube/build/build.2637226983 --output type=image,name=localhost/my-image:functional-717497: (2.991036173s)
I1209 04:18:26.192629 1181068 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2637226983
I1209 04:18:26.200714 1181068 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2637226983.tar
I1209 04:18:26.208749 1181068 build_images.go:218] Built localhost/my-image:functional-717497 from /tmp/build.2637226983.tar
I1209 04:18:26.208780 1181068 build_images.go:134] succeeded building to: functional-717497
I1209 04:18:26.208785 1181068 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-717497 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.86s)
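Note: the build context for this test is tiny. Judging only from the logged steps above (a 97-byte Dockerfile, FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt /), the testdata/build directory is presumably equivalent to the sketch below; the file contents and paths here are assumptions for illustration, not copied from the repository.

# Hypothetical reconstruction of the testdata/build context used by ImageBuild,
# inferred from the buildctl steps in the log above.
mkdir -p /tmp/build-sketch && cd /tmp/build-sketch
printf 'hello\n' > content.txt    # payload for the ADD step (contents assumed)
cat > Dockerfile <<'EOF'
FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /
EOF
# Same invocation the test makes; per the log, minikube tars the context, copies
# it into the node over SSH, and runs buildctl against containerd inside the node.
out/minikube-linux-arm64 -p functional-717497 image build -t localhost/my-image:functional-717497 .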

TestFunctional/parallel/ImageCommands/Setup (0.61s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-717497
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.61s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-717497 image load --daemon kicbase/echo-server:functional-717497 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-717497 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.07s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-717497 image load --daemon kicbase/echo-server:functional-717497 --alsologtostderr
2025/12/09 04:18:17 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-717497 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.25s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.61s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-717497
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-717497 image load --daemon kicbase/echo-server:functional-717497 --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-linux-arm64 -p functional-717497 image load --daemon kicbase/echo-server:functional-717497 --alsologtostderr: (1.033102408s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-717497 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.61s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.2s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-717497 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.20s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.22s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-717497 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.22s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.19s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-717497 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.19s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.41s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-717497 image save kicbase/echo-server:functional-717497 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.41s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.5s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-717497 image rm kicbase/echo-server:functional-717497 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-717497 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.50s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.69s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-717497 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-717497 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.69s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.5s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-717497
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-717497 image save --daemon kicbase/echo-server:functional-717497 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-717497
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.50s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-717497
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.03s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-717497
--- PASS: TestFunctional/delete_my-image_image (0.03s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-717497
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/test/nested/copy/1144231/hosts
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.05s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.05s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (3.43s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-667319 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-667319 cache add registry.k8s.io/pause:3.1: (1.257591864s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-667319 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-667319 cache add registry.k8s.io/pause:3.3: (1.099556991s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-667319 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-667319 cache add registry.k8s.io/pause:latest: (1.071300662s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (3.43s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (1.04s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-667319 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialCach3682355748/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-667319 cache add minikube-local-cache-test:functional-667319
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-667319 cache delete minikube-local-cache-test:functional-667319
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-667319
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (1.04s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.06s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-667319 ssh sudo crictl images
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.9s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-667319 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-667319 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-667319 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (287.125406ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-667319 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-667319 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.90s)
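Note: the cache_reload sequence above is the recovery path for images deleted inside the node. A minimal manual sketch of the same cycle (the same commands the test runs, annotated), assuming the functional-667319 profile is up:

# Remove the cached image from the node's containerd store, confirm it is gone
# (crictl inspecti exits non-zero with "no such image", as in the log above),
# then restore it from minikube's on-host cache and verify it is back.
out/minikube-linux-arm64 -p functional-667319 ssh sudo crictl rmi registry.k8s.io/pause:latest
out/minikube-linux-arm64 -p functional-667319 ssh sudo crictl inspecti registry.k8s.io/pause:latest || echo "removed as expected"
out/minikube-linux-arm64 -p functional-667319 cache reload
out/minikube-linux-arm64 -p functional-667319 ssh sudo crictl inspecti registry.k8s.io/pause:latest && echo "restored from cache"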

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.11s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.11s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (1.01s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-667319 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-arm64 -p functional-667319 logs: (1.010023348s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (1.01s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (0.96s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-667319 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs4092212575/001/logs.txt
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (0.96s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.46s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-667319 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-667319 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-667319 config get cpus: exit status 14 (53.015961ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-667319 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-667319 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-667319 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-667319 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-667319 config get cpus: exit status 14 (66.885628ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.46s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.4s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-667319 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-667319 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: exit status 23 (172.455403ms)

-- stdout --
	* [functional-667319] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22081
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22081-1142328/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-1142328/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1209 04:47:31.641908 1211295 out.go:360] Setting OutFile to fd 1 ...
	I1209 04:47:31.642025 1211295 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 04:47:31.642036 1211295 out.go:374] Setting ErrFile to fd 2...
	I1209 04:47:31.642043 1211295 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 04:47:31.642286 1211295 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-1142328/.minikube/bin
	I1209 04:47:31.642671 1211295 out.go:368] Setting JSON to false
	I1209 04:47:31.643536 1211295 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":26975,"bootTime":1765228677,"procs":158,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1209 04:47:31.643607 1211295 start.go:143] virtualization:  
	I1209 04:47:31.646923 1211295 out.go:179] * [functional-667319] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1209 04:47:31.650706 1211295 out.go:179]   - MINIKUBE_LOCATION=22081
	I1209 04:47:31.650838 1211295 notify.go:221] Checking for updates...
	I1209 04:47:31.656387 1211295 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 04:47:31.659375 1211295 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22081-1142328/kubeconfig
	I1209 04:47:31.662274 1211295 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-1142328/.minikube
	I1209 04:47:31.665100 1211295 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1209 04:47:31.668144 1211295 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 04:47:31.671519 1211295 config.go:182] Loaded profile config "functional-667319": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1209 04:47:31.672288 1211295 driver.go:422] Setting default libvirt URI to qemu:///system
	I1209 04:47:31.692288 1211295 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1209 04:47:31.692406 1211295 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 04:47:31.748128 1211295 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-09 04:47:31.739338465 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1209 04:47:31.748230 1211295 docker.go:319] overlay module found
	I1209 04:47:31.751479 1211295 out.go:179] * Using the docker driver based on existing profile
	I1209 04:47:31.754339 1211295 start.go:309] selected driver: docker
	I1209 04:47:31.754360 1211295 start.go:927] validating driver "docker" against &{Name:functional-667319 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-667319 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 04:47:31.754475 1211295 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 04:47:31.757921 1211295 out.go:203] 
	W1209 04:47:31.760742 1211295 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1209 04:47:31.763508 1211295 out.go:203] 

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-667319 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.40s)
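Note: DryRun passes because minikube validates the resource request before touching the existing profile. A short sketch of scripting against that behaviour, assuming the exit status 23 observed above for RSRC_INSUFFICIENT_REQ_MEMORY:

# An undersized --memory request should fail validation without modifying the
# cluster; exit status 23 is what this log reports for the failure.
out/minikube-linux-arm64 start -p functional-667319 --dry-run --memory 250MB --driver=docker --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
[ $? -eq 23 ] && echo "250MB rejected as expected"
# The same dry run without the memory override validates cleanly.
out/minikube-linux-arm64 start -p functional-667319 --dry-run --driver=docker --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0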

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.21s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-667319 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-667319 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: exit status 23 (208.009825ms)

-- stdout --
	* [functional-667319] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22081
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22081-1142328/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-1142328/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1209 04:47:34.321713 1211929 out.go:360] Setting OutFile to fd 1 ...
	I1209 04:47:34.321929 1211929 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 04:47:34.321959 1211929 out.go:374] Setting ErrFile to fd 2...
	I1209 04:47:34.321979 1211929 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 04:47:34.322391 1211929 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-1142328/.minikube/bin
	I1209 04:47:34.322830 1211929 out.go:368] Setting JSON to false
	I1209 04:47:34.323745 1211929 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":26978,"bootTime":1765228677,"procs":158,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1209 04:47:34.323845 1211929 start.go:143] virtualization:  
	I1209 04:47:34.327171 1211929 out.go:179] * [functional-667319] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1209 04:47:34.331051 1211929 notify.go:221] Checking for updates...
	I1209 04:47:34.331388 1211929 out.go:179]   - MINIKUBE_LOCATION=22081
	I1209 04:47:34.334585 1211929 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 04:47:34.337483 1211929 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22081-1142328/kubeconfig
	I1209 04:47:34.340345 1211929 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-1142328/.minikube
	I1209 04:47:34.343178 1211929 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1209 04:47:34.346040 1211929 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 04:47:34.349423 1211929 config.go:182] Loaded profile config "functional-667319": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1209 04:47:34.349977 1211929 driver.go:422] Setting default libvirt URI to qemu:///system
	I1209 04:47:34.375111 1211929 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1209 04:47:34.375225 1211929 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 04:47:34.453332 1211929 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-09 04:47:34.434527916 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1209 04:47:34.453454 1211929 docker.go:319] overlay module found
	I1209 04:47:34.456344 1211929 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1209 04:47:34.459148 1211929 start.go:309] selected driver: docker
	I1209 04:47:34.459167 1211929 start.go:927] validating driver "docker" against &{Name:functional-667319 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-667319 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 04:47:34.459255 1211929 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 04:47:34.462988 1211929 out.go:203] 
	W1209 04:47:34.465967 1211929 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1209 04:47:34.469187 1211929 out.go:203] 

** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.21s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.14s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-667319 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-667319 addons list -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.14s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.62s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-667319 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-667319 ssh "cat /etc/hostname"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.62s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (1.59s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-667319 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-667319 ssh -n functional-667319 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-667319 cp functional-667319:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelCp2135257691/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-667319 ssh -n functional-667319 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-667319 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-667319 ssh -n functional-667319 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (1.59s)
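cp works in both directions: a bare path is host-side, a profile-prefixed path (profile:/path) is inside the node, and the ssh+cat calls above are how each copy is verified. Sketch (/tmp/out.txt is a placeholder):
	$ minikube -p functional-667319 cp testdata/cp-test.txt /home/docker/cp-test.txt            # host -> node
	$ minikube -p functional-667319 cp functional-667319:/home/docker/cp-test.txt /tmp/out.txt  # node -> host
	$ minikube -p functional-667319 ssh "sudo cat /home/docker/cp-test.txt"                     # verify contents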

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.28s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/1144231/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-667319 ssh "sudo cat /etc/test/nested/copy/1144231/hosts"
I1209 04:45:42.835648 1144231 retry.go:31] will retry after 1.726943644s: Temporary Error: Get "http://10.98.178.96": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.28s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (1.73s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/1144231.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-667319 ssh "sudo cat /etc/ssl/certs/1144231.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/1144231.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-667319 ssh "sudo cat /usr/share/ca-certificates/1144231.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-667319 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/11442312.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-667319 ssh "sudo cat /etc/ssl/certs/11442312.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/11442312.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-667319 ssh "sudo cat /usr/share/ca-certificates/11442312.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-667319 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (1.73s)
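CertSync checks that a CA certificate supplied on the host shows up in the guest in all three places a TLS stack may look: /etc/ssl/certs/<name>.pem, /usr/share/ca-certificates/<name>.pem, and the OpenSSL hash symlink (e.g. 51391683.0). A sketch of exercising it by hand, assuming (per minikube's documented behavior) that PEM files under ~/.minikube/certs are installed at cluster start; my-ca.pem is a placeholder name:
	$ cp my-ca.pem ~/.minikube/certs/
	$ minikube start -p functional-667319
	$ minikube -p functional-667319 ssh "sudo cat /etc/ssl/certs/my-ca.pem"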

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.85s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-667319 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-667319 ssh "sudo systemctl is-active docker": exit status 1 (426.644635ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-667319 ssh "sudo systemctl is-active crio"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-667319 ssh "sudo systemctl is-active crio": exit status 1 (421.270909ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.85s)
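This profile runs containerd, so docker and crio must both be inactive; systemctl is-active exits 0 only for an active unit, so the exit status 3 above is the expected outcome, not a failure. Hand check:
	$ minikube -p functional-667319 ssh "sudo systemctl is-active containerd"  # active, exit 0
	$ minikube -p functional-667319 ssh "sudo systemctl is-active docker"      # inactive, non-zero exit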

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.3s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.30s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.05s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-667319 version --short
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.05s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.51s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-667319 version -o=json --components
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.51s)
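version --short prints only the minikube version string; -o=json --components additionally reports the versions of tooling inside the node (the exact component set depends on the configured runtime). Sketch:
	$ minikube -p functional-667319 version --short
	$ minikube -p functional-667319 version -o=json --components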

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0.01s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-667319 tunnel --alsologtostderr]
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.23s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-667319 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-667319 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.35.0-beta.0
registry.k8s.io/kube-proxy:v1.35.0-beta.0
registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
registry.k8s.io/kube-apiserver:v1.35.0-beta.0
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.13.1
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/library/minikube-local-cache-test:functional-667319
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:functional-667319
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-667319 image ls --format short --alsologtostderr:
I1209 04:47:36.842567 1212458 out.go:360] Setting OutFile to fd 1 ...
I1209 04:47:36.842734 1212458 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1209 04:47:36.842743 1212458 out.go:374] Setting ErrFile to fd 2...
I1209 04:47:36.842749 1212458 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1209 04:47:36.842996 1212458 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-1142328/.minikube/bin
I1209 04:47:36.843633 1212458 config.go:182] Loaded profile config "functional-667319": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1209 04:47:36.843751 1212458 config.go:182] Loaded profile config "functional-667319": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1209 04:47:36.844266 1212458 cli_runner.go:164] Run: docker container inspect functional-667319 --format={{.State.Status}}
I1209 04:47:36.861375 1212458 ssh_runner.go:195] Run: systemctl --version
I1209 04:47:36.861437 1212458 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-667319
I1209 04:47:36.878453 1212458 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/functional-667319/id_rsa Username:docker}
I1209 04:47:36.986318 1212458 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.23s)
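The four ImageList variants (short/table/json/yaml, this and the next three tests) differ only in output encoding; as each stderr trace shows, they are all backed by `sudo crictl images --output json` inside the node, reformatted on the host. Sketch:
	$ minikube -p functional-667319 image ls --format short   # one repo:tag per line
	$ minikube -p functional-667319 image ls --format table   # boxed table with IDs and sizes
	$ minikube -p functional-667319 image ls --format json
	$ minikube -p functional-667319 image ls --format yaml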

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.25s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-667319 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-667319 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                    IMAGE                    │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ docker.io/kicbase/echo-server               │ functional-667319  │ sha256:ce2d2c │ 2.17MB │
│ registry.k8s.io/kube-proxy                  │ v1.35.0-beta.0     │ sha256:404c2e │ 22.4MB │
│ registry.k8s.io/pause                       │ 3.3                │ sha256:3d1873 │ 249kB  │
│ registry.k8s.io/pause                       │ 3.1                │ sha256:8057e0 │ 262kB  │
│ docker.io/kindest/kindnetd                  │ v20250512-df8de77b │ sha256:b1a8c6 │ 40.6MB │
│ docker.io/library/minikube-local-cache-test │ functional-667319  │ sha256:f396cc │ 992B   │
│ registry.k8s.io/etcd                        │ 3.6.5-0            │ sha256:2c5f0d │ 21.1MB │
│ registry.k8s.io/kube-controller-manager     │ v1.35.0-beta.0     │ sha256:68b5f7 │ 20.7MB │
│ registry.k8s.io/kube-scheduler              │ v1.35.0-beta.0     │ sha256:163787 │ 15.4MB │
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                 │ sha256:ba04bb │ 8.03MB │
│ registry.k8s.io/pause                       │ 3.10.1             │ sha256:d7b100 │ 268kB  │
│ localhost/my-image                          │ functional-667319  │ sha256:8d19d3 │ 831kB  │
│ registry.k8s.io/coredns/coredns             │ v1.13.1            │ sha256:e08f4d │ 21.2MB │
│ registry.k8s.io/kube-apiserver              │ v1.35.0-beta.0     │ sha256:ccd634 │ 24.7MB │
│ registry.k8s.io/pause                       │ latest             │ sha256:8cb209 │ 71.3kB │
└─────────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-667319 image ls --format table --alsologtostderr:
I1209 04:47:41.149117 1212847 out.go:360] Setting OutFile to fd 1 ...
I1209 04:47:41.149330 1212847 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1209 04:47:41.149360 1212847 out.go:374] Setting ErrFile to fd 2...
I1209 04:47:41.149380 1212847 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1209 04:47:41.149719 1212847 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-1142328/.minikube/bin
I1209 04:47:41.150503 1212847 config.go:182] Loaded profile config "functional-667319": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1209 04:47:41.150691 1212847 config.go:182] Loaded profile config "functional-667319": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1209 04:47:41.151355 1212847 cli_runner.go:164] Run: docker container inspect functional-667319 --format={{.State.Status}}
I1209 04:47:41.168376 1212847 ssh_runner.go:195] Run: systemctl --version
I1209 04:47:41.168428 1212847 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-667319
I1209 04:47:41.203325 1212847 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/functional-667319/id_rsa Username:docker}
I1209 04:47:41.307538 1212847 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.25s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.23s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-667319 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-667319 image ls --format json --alsologtostderr:
[{"id":"sha256:8d19d3a32a56b2fa24160dc46919201cf0064b6c0d0fc7f41ab4eafa4fa50f4b","repoDigests":[],"repoTags":["localhost/my-image:functional-667319"],"size":"830617"},{"id":"sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904","repoDigests":["registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"],"repoTags":["registry.k8s.io/kube-proxy:v1.35.0-beta.0"],"size":"22429671"},{"id":"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"267939"},{"id":"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-667319"],"size":"2173567"},{"id":"sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0a
e606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"40636774"},{"id":"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"],"repoTags":["registry.k8s.io/coredns/coredns:v1.13.1"],"size":"21168808"},{"id":"sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"],"size":"20661043"},{"id":"sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b","repoDigests":["registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6"],"repoTags":["registry.k8s.io/kube-scheduler:v1.35.0-beta.0"],"size":"15391364"},{"id":"sha256:2c5f0dedd21c25ec3a6709
934d22152d53ec50fe57b72d29e4450655e3d14d42","repoDigests":["registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"21136588"},{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"},{"id":"sha256:f396cc1d2a2f792c8359c58d4cd23fe6d949d3fd4d68a61961f5310e98abe14b","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-667319"],"size":"992"},{"id":"sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4","repoDigests":["registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58"],"repoTags":["registry.k8s.io/kube-apiserver:v1.35.0-beta.0"],"size":"24678359"},{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"},{"id":"sha256:8cb2091f603e7
5187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"},{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-667319 image ls --format json --alsologtostderr:
I1209 04:47:40.916138 1212810 out.go:360] Setting OutFile to fd 1 ...
I1209 04:47:40.916321 1212810 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1209 04:47:40.916334 1212810 out.go:374] Setting ErrFile to fd 2...
I1209 04:47:40.916340 1212810 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1209 04:47:40.916620 1212810 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-1142328/.minikube/bin
I1209 04:47:40.917278 1212810 config.go:182] Loaded profile config "functional-667319": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1209 04:47:40.917441 1212810 config.go:182] Loaded profile config "functional-667319": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1209 04:47:40.918013 1212810 cli_runner.go:164] Run: docker container inspect functional-667319 --format={{.State.Status}}
I1209 04:47:40.934940 1212810 ssh_runner.go:195] Run: systemctl --version
I1209 04:47:40.934996 1212810 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-667319
I1209 04:47:40.951939 1212810 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/functional-667319/id_rsa Username:docker}
I1209 04:47:41.054412 1212810 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.23s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.23s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-667319 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-667319 image ls --format yaml --alsologtostderr:
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "21136588"
- id: sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904
repoDigests:
- registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a
repoTags:
- registry.k8s.io/kube-proxy:v1.35.0-beta.0
size: "22429671"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
- id: sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "40636774"
- id: sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6
repoTags:
- registry.k8s.io/coredns/coredns:v1.13.1
size: "21168808"
- id: sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58
repoTags:
- registry.k8s.io/kube-apiserver:v1.35.0-beta.0
size: "24678359"
- id: sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d
repoTags:
- registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
size: "20661043"
- id: sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6
repoTags:
- registry.k8s.io/kube-scheduler:v1.35.0-beta.0
size: "15391364"
- id: sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
repoTags:
- registry.k8s.io/pause:3.10.1
size: "267939"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"
- id: sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-667319
size: "2173567"
- id: sha256:f396cc1d2a2f792c8359c58d4cd23fe6d949d3fd4d68a61961f5310e98abe14b
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-667319
size: "992"

functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-667319 image ls --format yaml --alsologtostderr:
I1209 04:47:37.076522 1212495 out.go:360] Setting OutFile to fd 1 ...
I1209 04:47:37.076686 1212495 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1209 04:47:37.076698 1212495 out.go:374] Setting ErrFile to fd 2...
I1209 04:47:37.076704 1212495 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1209 04:47:37.076982 1212495 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-1142328/.minikube/bin
I1209 04:47:37.077614 1212495 config.go:182] Loaded profile config "functional-667319": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1209 04:47:37.077803 1212495 config.go:182] Loaded profile config "functional-667319": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1209 04:47:37.078366 1212495 cli_runner.go:164] Run: docker container inspect functional-667319 --format={{.State.Status}}
I1209 04:47:37.095539 1212495 ssh_runner.go:195] Run: systemctl --version
I1209 04:47:37.095592 1212495 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-667319
I1209 04:47:37.112643 1212495 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/functional-667319/id_rsa Username:docker}
I1209 04:47:37.214379 1212495 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.23s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (3.62s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-667319 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-667319 ssh pgrep buildkitd: exit status 1 (283.559882ms)

** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-667319 image build -t localhost/my-image:functional-667319 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-667319 image build -t localhost/my-image:functional-667319 testdata/build --alsologtostderr: (3.109850949s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-667319 image build -t localhost/my-image:functional-667319 testdata/build --alsologtostderr:
I1209 04:47:37.585926 1212597 out.go:360] Setting OutFile to fd 1 ...
I1209 04:47:37.586140 1212597 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1209 04:47:37.586170 1212597 out.go:374] Setting ErrFile to fd 2...
I1209 04:47:37.586191 1212597 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1209 04:47:37.586454 1212597 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-1142328/.minikube/bin
I1209 04:47:37.587117 1212597 config.go:182] Loaded profile config "functional-667319": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1209 04:47:37.587765 1212597 config.go:182] Loaded profile config "functional-667319": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1209 04:47:37.588361 1212597 cli_runner.go:164] Run: docker container inspect functional-667319 --format={{.State.Status}}
I1209 04:47:37.604770 1212597 ssh_runner.go:195] Run: systemctl --version
I1209 04:47:37.604815 1212597 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-667319
I1209 04:47:37.624079 1212597 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/functional-667319/id_rsa Username:docker}
I1209 04:47:37.726389 1212597 build_images.go:162] Building image from path: /tmp/build.2103764746.tar
I1209 04:47:37.726487 1212597 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1209 04:47:37.734175 1212597 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2103764746.tar
I1209 04:47:37.737803 1212597 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2103764746.tar: stat -c "%s %y" /var/lib/minikube/build/build.2103764746.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2103764746.tar': No such file or directory
I1209 04:47:37.737834 1212597 ssh_runner.go:362] scp /tmp/build.2103764746.tar --> /var/lib/minikube/build/build.2103764746.tar (3072 bytes)
I1209 04:47:37.754956 1212597 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2103764746
I1209 04:47:37.762429 1212597 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2103764746 -xf /var/lib/minikube/build/build.2103764746.tar
I1209 04:47:37.770060 1212597 containerd.go:394] Building image: /var/lib/minikube/build/build.2103764746
I1209 04:47:37.770144 1212597 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.2103764746 --local dockerfile=/var/lib/minikube/build/build.2103764746 --output type=image,name=localhost/my-image:functional-667319
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.7s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 DONE 0.1s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.2s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.4s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.5s

#6 [2/3] RUN true
#6 DONE 0.1s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:d46033f6125f8990e78340a5c30f1f7344842b22a8d6a7e5fe551e0716631074
#8 exporting manifest sha256:d46033f6125f8990e78340a5c30f1f7344842b22a8d6a7e5fe551e0716631074 0.0s done
#8 exporting config sha256:8d19d3a32a56b2fa24160dc46919201cf0064b6c0d0fc7f41ab4eafa4fa50f4b 0.0s done
#8 naming to localhost/my-image:functional-667319 done
#8 DONE 0.2s
I1209 04:47:40.619869 1212597 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.2103764746 --local dockerfile=/var/lib/minikube/build/build.2103764746 --output type=image,name=localhost/my-image:functional-667319: (2.849696346s)
I1209 04:47:40.619965 1212597 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2103764746
I1209 04:47:40.627809 1212597 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2103764746.tar
I1209 04:47:40.635091 1212597 build_images.go:218] Built localhost/my-image:functional-667319 from /tmp/build.2103764746.tar
I1209 04:47:40.635121 1212597 build_images.go:134] succeeded building to: functional-667319
I1209 04:47:40.635127 1212597 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-667319 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (3.62s)
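image build tars the local build context, copies it to /var/lib/minikube/build inside the node, and drives buildctl against the node's BuildKit (the #1..#8 lines above are ordinary BuildKit progress output). Equivalent hand run:
	$ minikube -p functional-667319 image build -t localhost/my-image:functional-667319 testdata/build
	$ minikube -p functional-667319 image ls | grep my-image   # confirm it landed in containerd's store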

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.25s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-667319
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.25s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (1.12s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-667319 image load --daemon kicbase/echo-server:functional-667319 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-667319 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (1.12s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (1.08s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-667319 image load --daemon kicbase/echo-server:functional-667319 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-667319 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (1.08s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (1.32s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-667319
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-667319 image load --daemon kicbase/echo-server:functional-667319 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-667319 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (1.32s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.32s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-667319 image save kicbase/echo-server:functional-667319 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.32s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.47s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-667319 image rm kicbase/echo-server:functional-667319 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-667319 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.47s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.69s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-667319 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-667319 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.69s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.36s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-667319
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-667319 image save --daemon kicbase/echo-server:functional-667319 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-667319
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.36s)
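Together, the image subcommands above make a full round trip between the host Docker daemon, a tar archive, and the cluster's containerd store; a sketch using the tag from this run (echo.tar is a placeholder path):
	$ minikube -p functional-667319 image load --daemon kicbase/echo-server:functional-667319   # docker daemon -> cluster
	$ minikube -p functional-667319 image save kicbase/echo-server:functional-667319 echo.tar   # cluster -> tar
	$ minikube -p functional-667319 image rm kicbase/echo-server:functional-667319              # remove from cluster
	$ minikube -p functional-667319 image load echo.tar                                         # tar -> cluster
	$ minikube -p functional-667319 image save --daemon kicbase/echo-server:functional-667319   # cluster -> docker daemon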

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.15s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-667319 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.15s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-667319 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.14s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-667319 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.14s)
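update-context rewrites the profile's kubeconfig entry so the server address matches the running cluster; the three subtests differ only in the kubeconfig state they start from (unchanged, missing cluster, no clusters at all). By hand:
	$ minikube -p functional-667319 update-context --alsologtostderr -v=2
	$ kubectl config current-context   # expected to report functional-667319 afterwards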

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (1.83s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-667319 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo4027315530/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-667319 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-667319 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (324.119253ms)

** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1209 04:45:47.894909 1144231 retry.go:31] will retry after 461.471651ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-667319 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-667319 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-667319 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo4027315530/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-667319 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-667319 ssh "sudo umount -f /mount-9p": exit status 1 (262.724143ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-667319 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-667319 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo4027315530/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (1.83s)
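mount runs a 9p server on the host (here pinned to port 46464) and mounts it in the guest; findmnt is the readiness probe (hence the single retry above), and stopping the host process tears the mount down, which is why the later forced umount finds nothing mounted. Sketch (/tmp/shared is a placeholder):
	$ minikube mount -p functional-667319 /tmp/shared:/mount-9p --port 46464 &   # host-side 9p server
	$ minikube -p functional-667319 ssh "findmnt -T /mount-9p"                   # retry until it succeeds
	$ minikube mount -p functional-667319 --kill=true                            # kill mount processes for the profile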

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (2.2s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-667319 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3101865674/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-667319 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3101865674/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-667319 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3101865674/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-667319 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-667319 ssh "findmnt -T" /mount1: exit status 1 (622.558928ms)

** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1209 04:45:50.027426 1144231 retry.go:31] will retry after 682.155005ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-667319 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-667319 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-667319 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-667319 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-667319 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3101865674/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-667319 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3101865674/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-667319 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3101865674/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (2.20s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.1s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-667319 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: exit status 103
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.10s)
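tunnel is a long-running host process that gives LoadBalancer services a reachable address; the test tolerates the exit status 103 on shutdown as long as the process is gone. Sketch:
	$ minikube -p functional-667319 tunnel --alsologtostderr &   # leave running while LoadBalancer services are under test
	$ kubectl get svc -w                                         # EXTERNAL-IP should leave <pending> once the tunnel is up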

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.4s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.40s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.38s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "316.812106ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "61.115179ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.38s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.41s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "356.433343ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "54.174475ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.41s)
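The timing gap above (~317ms vs ~61ms, ~356ms vs ~54ms) is the point of the -l/--light flags: light mode reads profile configs without probing each cluster's status. Sketch:
	$ minikube profile list              # full listing, probes every cluster
	$ minikube profile list -l           # light: config only, much faster
	$ minikube profile list -o json --light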

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.04s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-667319
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-667319
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-667319
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (214.89s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-853930 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd
E1209 04:50:32.751091 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 04:50:32.757581 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 04:50:32.768907 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 04:50:32.791102 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 04:50:32.832406 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 04:50:32.913763 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 04:50:33.075212 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 04:50:33.396970 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 04:50:34.038885 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 04:50:35.320173 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 04:50:37.882315 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 04:50:43.004297 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 04:50:53.245673 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 04:51:06.732151 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/addons-221952/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 04:51:13.727145 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 04:51:54.689341 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 04:52:38.986002 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-717497/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 04:53:16.611476 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-853930 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd: (3m33.974990507s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-853930 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (214.89s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (7.06s)
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-853930 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-853930 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-853930 kubectl -- rollout status deployment/busybox: (4.216563027s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-853930 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-853930 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-853930 kubectl -- exec busybox-7b57f96db7-gc6dq -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-853930 kubectl -- exec busybox-7b57f96db7-m5k9c -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-853930 kubectl -- exec busybox-7b57f96db7-w9pbr -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-853930 kubectl -- exec busybox-7b57f96db7-gc6dq -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-853930 kubectl -- exec busybox-7b57f96db7-m5k9c -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-853930 kubectl -- exec busybox-7b57f96db7-w9pbr -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-853930 kubectl -- exec busybox-7b57f96db7-gc6dq -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-853930 kubectl -- exec busybox-7b57f96db7-m5k9c -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-853930 kubectl -- exec busybox-7b57f96db7-w9pbr -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.06s)
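Note: the three nslookup targets above exercise external, namespace-short, and fully qualified cluster DNS from each busybox replica. Equivalent manual check against one replica (pod name taken from the log above):

  kubectl --context ha-853930 exec busybox-7b57f96db7-gc6dq -- nslookup kubernetes.io
  kubectl --context ha-853930 exec busybox-7b57f96db7-gc6dq -- nslookup kubernetes.default
  kubectl --context ha-853930 exec busybox-7b57f96db7-gc6dq -- nslookup kubernetes.default.svc.cluster.local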

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.61s)
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-853930 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-853930 kubectl -- exec busybox-7b57f96db7-gc6dq -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-853930 kubectl -- exec busybox-7b57f96db7-gc6dq -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-853930 kubectl -- exec busybox-7b57f96db7-m5k9c -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-853930 kubectl -- exec busybox-7b57f96db7-m5k9c -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-853930 kubectl -- exec busybox-7b57f96db7-w9pbr -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-853930 kubectl -- exec busybox-7b57f96db7-w9pbr -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.61s)
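Note: the pipeline above first extracts the host IP, then pings it. A commented sketch of the extraction, assuming busybox nslookup's output layout:

  kubectl --context ha-853930 exec busybox-7b57f96db7-gc6dq -- sh -c \
    "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
  # awk 'NR==5'    -> keep only the answer line of busybox nslookup output
  # cut -d' ' -f3  -> third space-separated field, i.e. the host IP (192.168.49.1 here)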

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (27.82s)
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-853930 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-853930 node add --alsologtostderr -v 5: (26.74147789s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-853930 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-853930 status --alsologtostderr -v 5: (1.075293106s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (27.82s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.13s)
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-853930 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.13s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (1.06s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.06108625s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.06s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (19.89s)
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-853930 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-853930 status --output json --alsologtostderr -v 5: (1.012762958s)
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-853930 cp testdata/cp-test.txt ha-853930:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-853930 ssh -n ha-853930 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-853930 cp ha-853930:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile902219338/001/cp-test_ha-853930.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-853930 ssh -n ha-853930 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-853930 cp ha-853930:/home/docker/cp-test.txt ha-853930-m02:/home/docker/cp-test_ha-853930_ha-853930-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-853930 ssh -n ha-853930 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-853930 ssh -n ha-853930-m02 "sudo cat /home/docker/cp-test_ha-853930_ha-853930-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-853930 cp ha-853930:/home/docker/cp-test.txt ha-853930-m03:/home/docker/cp-test_ha-853930_ha-853930-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-853930 ssh -n ha-853930 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-853930 ssh -n ha-853930-m03 "sudo cat /home/docker/cp-test_ha-853930_ha-853930-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-853930 cp ha-853930:/home/docker/cp-test.txt ha-853930-m04:/home/docker/cp-test_ha-853930_ha-853930-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-853930 ssh -n ha-853930 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-853930 ssh -n ha-853930-m04 "sudo cat /home/docker/cp-test_ha-853930_ha-853930-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-853930 cp testdata/cp-test.txt ha-853930-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-853930 ssh -n ha-853930-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-853930 cp ha-853930-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile902219338/001/cp-test_ha-853930-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-853930 ssh -n ha-853930-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-853930 cp ha-853930-m02:/home/docker/cp-test.txt ha-853930:/home/docker/cp-test_ha-853930-m02_ha-853930.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-853930 ssh -n ha-853930-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-853930 ssh -n ha-853930 "sudo cat /home/docker/cp-test_ha-853930-m02_ha-853930.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-853930 cp ha-853930-m02:/home/docker/cp-test.txt ha-853930-m03:/home/docker/cp-test_ha-853930-m02_ha-853930-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-853930 ssh -n ha-853930-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-853930 ssh -n ha-853930-m03 "sudo cat /home/docker/cp-test_ha-853930-m02_ha-853930-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-853930 cp ha-853930-m02:/home/docker/cp-test.txt ha-853930-m04:/home/docker/cp-test_ha-853930-m02_ha-853930-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-853930 ssh -n ha-853930-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-853930 ssh -n ha-853930-m04 "sudo cat /home/docker/cp-test_ha-853930-m02_ha-853930-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-853930 cp testdata/cp-test.txt ha-853930-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-853930 ssh -n ha-853930-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-853930 cp ha-853930-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile902219338/001/cp-test_ha-853930-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-853930 ssh -n ha-853930-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-853930 cp ha-853930-m03:/home/docker/cp-test.txt ha-853930:/home/docker/cp-test_ha-853930-m03_ha-853930.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-853930 ssh -n ha-853930-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-853930 ssh -n ha-853930 "sudo cat /home/docker/cp-test_ha-853930-m03_ha-853930.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-853930 cp ha-853930-m03:/home/docker/cp-test.txt ha-853930-m02:/home/docker/cp-test_ha-853930-m03_ha-853930-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-853930 ssh -n ha-853930-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-853930 ssh -n ha-853930-m02 "sudo cat /home/docker/cp-test_ha-853930-m03_ha-853930-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-853930 cp ha-853930-m03:/home/docker/cp-test.txt ha-853930-m04:/home/docker/cp-test_ha-853930-m03_ha-853930-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-853930 ssh -n ha-853930-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-853930 ssh -n ha-853930-m04 "sudo cat /home/docker/cp-test_ha-853930-m03_ha-853930-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-853930 cp testdata/cp-test.txt ha-853930-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-853930 ssh -n ha-853930-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-853930 cp ha-853930-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile902219338/001/cp-test_ha-853930-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-853930 ssh -n ha-853930-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-853930 cp ha-853930-m04:/home/docker/cp-test.txt ha-853930:/home/docker/cp-test_ha-853930-m04_ha-853930.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-853930 ssh -n ha-853930-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-853930 ssh -n ha-853930 "sudo cat /home/docker/cp-test_ha-853930-m04_ha-853930.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-853930 cp ha-853930-m04:/home/docker/cp-test.txt ha-853930-m02:/home/docker/cp-test_ha-853930-m04_ha-853930-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-853930 ssh -n ha-853930-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-853930 ssh -n ha-853930-m02 "sudo cat /home/docker/cp-test_ha-853930-m04_ha-853930-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-853930 cp ha-853930-m04:/home/docker/cp-test.txt ha-853930-m03:/home/docker/cp-test_ha-853930-m04_ha-853930-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-853930 ssh -n ha-853930-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-853930 ssh -n ha-853930-m03 "sudo cat /home/docker/cp-test_ha-853930-m04_ha-853930-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (19.89s)
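Note: the block above runs one copy-and-verify round trip for every (source, destination) pair in the 4-node matrix. A single representative round trip:

  # push a file to a node, then read it back over ssh to verify the copy
  out/minikube-linux-arm64 -p ha-853930 cp testdata/cp-test.txt ha-853930-m02:/home/docker/cp-test.txt
  out/minikube-linux-arm64 -p ha-853930 ssh -n ha-853930-m02 "sudo cat /home/docker/cp-test.txt"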

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (12.92s)
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-853930 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-853930 node stop m02 --alsologtostderr -v 5: (12.068173051s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-853930 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-853930 status --alsologtostderr -v 5: exit status 7 (846.589781ms)

                                                
                                                
-- stdout --
	ha-853930
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-853930-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-853930-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-853930-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1209 04:54:39.864172 1230448 out.go:360] Setting OutFile to fd 1 ...
	I1209 04:54:39.864621 1230448 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 04:54:39.864638 1230448 out.go:374] Setting ErrFile to fd 2...
	I1209 04:54:39.864645 1230448 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 04:54:39.864929 1230448 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-1142328/.minikube/bin
	I1209 04:54:39.865144 1230448 out.go:368] Setting JSON to false
	I1209 04:54:39.865177 1230448 mustload.go:66] Loading cluster: ha-853930
	I1209 04:54:39.865284 1230448 notify.go:221] Checking for updates...
	I1209 04:54:39.865791 1230448 config.go:182] Loaded profile config "ha-853930": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1209 04:54:39.866125 1230448 status.go:174] checking status of ha-853930 ...
	I1209 04:54:39.866716 1230448 cli_runner.go:164] Run: docker container inspect ha-853930 --format={{.State.Status}}
	I1209 04:54:39.888609 1230448 status.go:371] ha-853930 host status = "Running" (err=<nil>)
	I1209 04:54:39.888662 1230448 host.go:66] Checking if "ha-853930" exists ...
	I1209 04:54:39.889325 1230448 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-853930
	I1209 04:54:39.923946 1230448 host.go:66] Checking if "ha-853930" exists ...
	I1209 04:54:39.924314 1230448 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1209 04:54:39.924363 1230448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-853930
	I1209 04:54:39.941891 1230448 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33905 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/ha-853930/id_rsa Username:docker}
	I1209 04:54:40.066743 1230448 ssh_runner.go:195] Run: systemctl --version
	I1209 04:54:40.078563 1230448 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 04:54:40.094493 1230448 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 04:54:40.183723 1230448 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:62 OomKillDisable:true NGoroutines:72 SystemTime:2025-12-09 04:54:40.17342642 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1209 04:54:40.184401 1230448 kubeconfig.go:125] found "ha-853930" server: "https://192.168.49.254:8443"
	I1209 04:54:40.184443 1230448 api_server.go:166] Checking apiserver status ...
	I1209 04:54:40.184504 1230448 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:54:40.199940 1230448 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1440/cgroup
	I1209 04:54:40.208680 1230448 api_server.go:182] apiserver freezer: "4:freezer:/docker/c30e0af482eec5cfb9a6d5438fc8bf9323a2a205890981be7d33a32e11b49996/kubepods/burstable/pode3222c18cec691eda93cc64521c71a52/14d1d7036d83ff15b76a80e0fe47af8720f9e28e992c807a5110b5ec7b75ff15"
	I1209 04:54:40.208771 1230448 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/c30e0af482eec5cfb9a6d5438fc8bf9323a2a205890981be7d33a32e11b49996/kubepods/burstable/pode3222c18cec691eda93cc64521c71a52/14d1d7036d83ff15b76a80e0fe47af8720f9e28e992c807a5110b5ec7b75ff15/freezer.state
	I1209 04:54:40.216846 1230448 api_server.go:204] freezer state: "THAWED"
	I1209 04:54:40.216878 1230448 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1209 04:54:40.225375 1230448 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1209 04:54:40.225406 1230448 status.go:463] ha-853930 apiserver status = Running (err=<nil>)
	I1209 04:54:40.225418 1230448 status.go:176] ha-853930 status: &{Name:ha-853930 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1209 04:54:40.225468 1230448 status.go:174] checking status of ha-853930-m02 ...
	I1209 04:54:40.225823 1230448 cli_runner.go:164] Run: docker container inspect ha-853930-m02 --format={{.State.Status}}
	I1209 04:54:40.243932 1230448 status.go:371] ha-853930-m02 host status = "Stopped" (err=<nil>)
	I1209 04:54:40.243955 1230448 status.go:384] host is not running, skipping remaining checks
	I1209 04:54:40.243963 1230448 status.go:176] ha-853930-m02 status: &{Name:ha-853930-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1209 04:54:40.243985 1230448 status.go:174] checking status of ha-853930-m03 ...
	I1209 04:54:40.244356 1230448 cli_runner.go:164] Run: docker container inspect ha-853930-m03 --format={{.State.Status}}
	I1209 04:54:40.262608 1230448 status.go:371] ha-853930-m03 host status = "Running" (err=<nil>)
	I1209 04:54:40.262654 1230448 host.go:66] Checking if "ha-853930-m03" exists ...
	I1209 04:54:40.263074 1230448 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-853930-m03
	I1209 04:54:40.280365 1230448 host.go:66] Checking if "ha-853930-m03" exists ...
	I1209 04:54:40.280700 1230448 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1209 04:54:40.280752 1230448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-853930-m03
	I1209 04:54:40.298580 1230448 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33915 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/ha-853930-m03/id_rsa Username:docker}
	I1209 04:54:40.416907 1230448 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 04:54:40.432873 1230448 kubeconfig.go:125] found "ha-853930" server: "https://192.168.49.254:8443"
	I1209 04:54:40.432942 1230448 api_server.go:166] Checking apiserver status ...
	I1209 04:54:40.433017 1230448 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:54:40.447985 1230448 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1361/cgroup
	I1209 04:54:40.458204 1230448 api_server.go:182] apiserver freezer: "4:freezer:/docker/e8eb53e15e825e0c19c4d8115ea178d5be98660855440792f7f269ca71086b55/kubepods/burstable/pod44b1fd58ad70eb48e4448a2727f440de/f6e6cf0da09d3eaae8a0b9f459ef315f70c87b81fbe43a5b3a56388cf55962a1"
	I1209 04:54:40.458349 1230448 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/e8eb53e15e825e0c19c4d8115ea178d5be98660855440792f7f269ca71086b55/kubepods/burstable/pod44b1fd58ad70eb48e4448a2727f440de/f6e6cf0da09d3eaae8a0b9f459ef315f70c87b81fbe43a5b3a56388cf55962a1/freezer.state
	I1209 04:54:40.473288 1230448 api_server.go:204] freezer state: "THAWED"
	I1209 04:54:40.473315 1230448 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1209 04:54:40.483023 1230448 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1209 04:54:40.483065 1230448 status.go:463] ha-853930-m03 apiserver status = Running (err=<nil>)
	I1209 04:54:40.483074 1230448 status.go:176] ha-853930-m03 status: &{Name:ha-853930-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1209 04:54:40.483094 1230448 status.go:174] checking status of ha-853930-m04 ...
	I1209 04:54:40.483430 1230448 cli_runner.go:164] Run: docker container inspect ha-853930-m04 --format={{.State.Status}}
	I1209 04:54:40.500376 1230448 status.go:371] ha-853930-m04 host status = "Running" (err=<nil>)
	I1209 04:54:40.500404 1230448 host.go:66] Checking if "ha-853930-m04" exists ...
	I1209 04:54:40.500712 1230448 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-853930-m04
	I1209 04:54:40.518122 1230448 host.go:66] Checking if "ha-853930-m04" exists ...
	I1209 04:54:40.518441 1230448 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1209 04:54:40.518492 1230448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-853930-m04
	I1209 04:54:40.536103 1230448 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33920 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/ha-853930-m04/id_rsa Username:docker}
	I1209 04:54:40.641403 1230448 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 04:54:40.654187 1230448 status.go:176] ha-853930-m04 status: &{Name:ha-853930-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.92s)
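Note: "status" exits non-zero once any node is down; the exit status 7 above lines up with the Stopped m02 entry in the stdout block. A sketch of branching on that in a script (the exit-code semantics are inferred from this run, not from documentation):

  out/minikube-linux-arm64 -p ha-853930 status
  if [ $? -eq 7 ]; then
    echo "at least one node is stopped"
  fi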

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.79s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.79s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (13.45s)
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-853930 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-853930 node start m02 --alsologtostderr -v 5: (11.812426117s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-853930 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-853930 status --alsologtostderr -v 5: (1.497217045s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (13.45s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.38s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.383410834s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.38s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (99.69s)
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-853930 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-853930 stop --alsologtostderr -v 5
E1209 04:55:32.751090 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-853930 stop --alsologtostderr -v 5: (37.576081596s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-853930 start --wait true --alsologtostderr -v 5
E1209 04:55:42.061498 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-717497/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 04:56:00.452757 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 04:56:06.731967 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/addons-221952/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 -p ha-853930 start --wait true --alsologtostderr -v 5: (1m1.923114339s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-853930 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (99.69s)
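Note: the invariant checked here is that a full stop/start cycle preserves cluster membership. A minimal sketch of the same comparison:

  before=$(out/minikube-linux-arm64 -p ha-853930 node list)
  out/minikube-linux-arm64 -p ha-853930 stop
  out/minikube-linux-arm64 -p ha-853930 start --wait true
  after=$(out/minikube-linux-arm64 -p ha-853930 node list)
  [ "$before" = "$after" ] && echo "node list unchanged across restart"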

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (11.22s)
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-853930 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-853930 node delete m03 --alsologtostderr -v 5: (10.25679494s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-853930 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.22s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.77s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.77s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (36.58s)
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-853930 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-853930 stop --alsologtostderr -v 5: (36.465537468s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-853930 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-853930 status --alsologtostderr -v 5: exit status 7 (118.25914ms)

                                                
                                                
-- stdout --
	ha-853930
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-853930-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-853930-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1209 04:57:24.463355 1245326 out.go:360] Setting OutFile to fd 1 ...
	I1209 04:57:24.463586 1245326 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 04:57:24.463600 1245326 out.go:374] Setting ErrFile to fd 2...
	I1209 04:57:24.463605 1245326 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 04:57:24.463888 1245326 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-1142328/.minikube/bin
	I1209 04:57:24.464137 1245326 out.go:368] Setting JSON to false
	I1209 04:57:24.464182 1245326 mustload.go:66] Loading cluster: ha-853930
	I1209 04:57:24.464275 1245326 notify.go:221] Checking for updates...
	I1209 04:57:24.464644 1245326 config.go:182] Loaded profile config "ha-853930": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1209 04:57:24.464664 1245326 status.go:174] checking status of ha-853930 ...
	I1209 04:57:24.465463 1245326 cli_runner.go:164] Run: docker container inspect ha-853930 --format={{.State.Status}}
	I1209 04:57:24.482813 1245326 status.go:371] ha-853930 host status = "Stopped" (err=<nil>)
	I1209 04:57:24.482835 1245326 status.go:384] host is not running, skipping remaining checks
	I1209 04:57:24.482841 1245326 status.go:176] ha-853930 status: &{Name:ha-853930 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1209 04:57:24.482867 1245326 status.go:174] checking status of ha-853930-m02 ...
	I1209 04:57:24.483162 1245326 cli_runner.go:164] Run: docker container inspect ha-853930-m02 --format={{.State.Status}}
	I1209 04:57:24.508190 1245326 status.go:371] ha-853930-m02 host status = "Stopped" (err=<nil>)
	I1209 04:57:24.508214 1245326 status.go:384] host is not running, skipping remaining checks
	I1209 04:57:24.508232 1245326 status.go:176] ha-853930-m02 status: &{Name:ha-853930-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1209 04:57:24.508256 1245326 status.go:174] checking status of ha-853930-m04 ...
	I1209 04:57:24.508568 1245326 cli_runner.go:164] Run: docker container inspect ha-853930-m04 --format={{.State.Status}}
	I1209 04:57:24.531215 1245326 status.go:371] ha-853930-m04 host status = "Stopped" (err=<nil>)
	I1209 04:57:24.531238 1245326 status.go:384] host is not running, skipping remaining checks
	I1209 04:57:24.531245 1245326 status.go:176] ha-853930-m04 status: &{Name:ha-853930-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.58s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (58.8s)
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-853930 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd
E1209 04:57:38.984992 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-717497/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 -p ha-853930 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd: (57.83451376s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-853930 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (58.80s)
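Note: the go-template in the final command walks each node's conditions and prints the status of the "Ready" condition, yielding one True/False line per node:

  kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'
  # expected: one " True" line per node once the restarted cluster settles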

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.79s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.79s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (79.88s)
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-853930 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 -p ha-853930 node add --control-plane --alsologtostderr -v 5: (1m18.799962243s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-853930 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-853930 status --alsologtostderr -v 5: (1.081382143s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (79.88s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.14s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.139466091s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.14s)

                                                
                                    
TestJSONOutput/start/Command (77.44s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-248924 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=containerd
E1209 05:00:32.752147 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 05:01:06.733486 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/addons-221952/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-248924 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=containerd: (1m17.437780375s)
--- PASS: TestJSONOutput/start/Command (77.44s)
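Note: the Audit and parallel subtests that follow assert on the event stream this start emitted. A sketch of the kind of property they check, assuming jq is available:

  out/minikube-linux-arm64 start -p json-output-248924 --output=json --user=testUser --memory=3072 --wait=true --driver=docker --container-runtime=containerd \
    | jq -r 'select(.type == "io.k8s.sigs.minikube.step") | .data.currentstep'
  # step numbers should not repeat across different steps (DistinctCurrentSteps)
  # and should be non-decreasing over the stream (IncreasingCurrentSteps)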

                                                
                                    
TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.75s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-248924 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.75s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.63s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-248924 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.63s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.97s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-248924 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-248924 --output=json --user=testUser: (5.968251772s)
--- PASS: TestJSONOutput/stop/Command (5.97s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.23s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-568245 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-568245 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (92.607604ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"cd6d0dfa-548d-4721-8131-bacfff7a2e53","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-568245] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"b0d91595-7c4d-49a9-9ac8-0dde04d0be1a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22081"}}
	{"specversion":"1.0","id":"9727262d-6059-432f-ba51-100706cc9ee1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"db306c0e-1f71-4f9f-9869-942afc8e260f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22081-1142328/kubeconfig"}}
	{"specversion":"1.0","id":"cf1cd4b8-6209-4a31-8d65-6efcf8f6928f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-1142328/.minikube"}}
	{"specversion":"1.0","id":"606bd590-1995-4e1c-8a9b-b502b847c68b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"4d992beb-7d27-42b6-8e74-9b75a708e8f3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"7de3a62a-cd0e-40b5-a8bc-011cbd678781","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-568245" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-568245
--- PASS: TestErrorJSONOutput (0.23s)
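
The -- stdout -- block above also documents the event schema the JSONOutput tests rely on: one CloudEvents-style JSON object per line, with a type of io.k8s.sigs.minikube.step, .info, or .error and a string-valued data payload. Below is a minimal consumer sketch in Go, assuming the payload stays string-valued as logged; it is illustrative, not the suite's own helper, and checks the same step-numbering property that DistinctCurrentSteps and IncreasingCurrentSteps assert.

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
	"strconv"
)

// minikubeEvent mirrors the fields visible in the logged events.
type minikubeEvent struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	last := -1
	sc := bufio.NewScanner(os.Stdin) // e.g. minikube start --output=json | this program
	for sc.Scan() {
		var ev minikubeEvent
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip any non-JSON lines
		}
		switch ev.Type {
		case "io.k8s.sigs.minikube.step":
			step, err := strconv.Atoi(ev.Data["currentstep"])
			if err != nil {
				continue
			}
			if step <= last { // steps must be distinct and increasing
				fmt.Printf("step did not increase: %d after %d\n", step, last)
			}
			last = step
		case "io.k8s.sigs.minikube.error":
			fmt.Printf("error %s (exit %s): %s\n",
				ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
		}
	}
}
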

                                                
                                    
TestKicCustomNetwork/create_custom_network (54.16s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-074639 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-074639 --network=: (51.912805057s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-074639" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-074639
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-074639: (2.224276161s)
--- PASS: TestKicCustomNetwork/create_custom_network (54.16s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (36.51s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-470001 --network=bridge
E1209 05:02:38.985989 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-717497/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-470001 --network=bridge: (34.343181696s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-470001" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-470001
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-470001: (2.139708146s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (36.51s)

                                                
                                    
TestKicExistingNetwork (37.4s)

                                                
                                                
=== RUN   TestKicExistingNetwork
I1209 05:02:53.668437 1144231 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1209 05:02:53.684964 1144231 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1209 05:02:53.685064 1144231 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1209 05:02:53.685087 1144231 cli_runner.go:164] Run: docker network inspect existing-network
W1209 05:02:53.703146 1144231 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1209 05:02:53.703180 1144231 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I1209 05:02:53.703197 1144231 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I1209 05:02:53.703301 1144231 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1209 05:02:53.721629 1144231 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-7a15eec16b1a IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:8a:b7:58:bc:12:6c} reservation:<nil>}
I1209 05:02:53.721985 1144231 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001875420}
I1209 05:02:53.722009 1144231 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1209 05:02:53.722061 1144231 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1209 05:02:53.783125 1144231 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-145420 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-145420 --network=existing-network: (35.111926378s)
helpers_test.go:175: Cleaning up "existing-network-145420" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-145420
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-145420: (2.138888087s)
I1209 05:03:31.050331 1144231 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (37.40s)
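
The cli_runner lines above spell out the whole flow this test exercises: inspect the named network, find it missing, pick a free private subnet, create the bridge network with minikube's labels, then start with --network=existing-network so minikube adopts it instead of creating its own. A minimal Go sketch of the same sequence follows, with the docker flags copied from the network_create line above; the profile name "demo" is hypothetical.

package main

import (
	"log"
	"os/exec"
)

// run executes a command and aborts with its combined output on failure.
func run(name string, args ...string) {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		log.Fatalf("%s %v: %v\n%s", name, args, err, out)
	}
}

func main() {
	// Flags copied verbatim from the network_create line in the log above.
	run("docker", "network", "create",
		"--driver=bridge",
		"--subnet=192.168.58.0/24", "--gateway=192.168.58.1",
		"-o", "--ip-masq", "-o", "--icc",
		"-o", "com.docker.network.driver.mtu=1500",
		"--label=created_by.minikube.sigs.k8s.io=true",
		"--label=name.minikube.sigs.k8s.io=existing-network",
		"existing-network")
	// minikube should adopt the pre-created network rather than make one.
	run("out/minikube-linux-arm64", "start", "-p", "demo", "--network=existing-network")
	run("out/minikube-linux-arm64", "delete", "-p", "demo")
}
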

                                                
                                    
TestKicCustomSubnet (34.08s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-760764 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-760764 --subnet=192.168.60.0/24: (31.878963134s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-760764 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-760764" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-760764
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-760764: (2.17817298s)
--- PASS: TestKicCustomSubnet (34.08s)

                                                
                                    
TestKicStaticIP (36.16s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-236902 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-236902 --static-ip=192.168.200.200: (33.765411766s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-236902 ip
helpers_test.go:175: Cleaning up "static-ip-236902" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-236902
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-236902: (2.244830026s)
--- PASS: TestKicStaticIP (36.16s)

                                                
                                    
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
TestMinikubeProfile (70.63s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-949295 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-949295 --driver=docker  --container-runtime=containerd: (31.796086303s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-951781 --driver=docker  --container-runtime=containerd
E1209 05:05:32.752163 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-951781 --driver=docker  --container-runtime=containerd: (33.193845352s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-949295
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-951781
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-951781" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-951781
E1209 05:05:49.816187 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/addons-221952/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-951781: (2.147508274s)
helpers_test.go:175: Cleaning up "first-949295" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-949295
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-949295: (2.062549597s)
--- PASS: TestMinikubeProfile (70.63s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (8.53s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-766201 --memory=3072 --mount-string /tmp/TestMountStartserial1313581607/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-766201 --memory=3072 --mount-string /tmp/TestMountStartserial1313581607/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (7.533647816s)
--- PASS: TestMountStart/serial/StartWithMountFirst (8.53s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-766201 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.27s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (8.66s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-767990 --memory=3072 --mount-string /tmp/TestMountStartserial1313581607/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
E1209 05:06:06.732202 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/addons-221952/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-767990 --memory=3072 --mount-string /tmp/TestMountStartserial1313581607/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (7.657572422s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.66s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-767990 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.26s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.72s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-766201 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-766201 --alsologtostderr -v=5: (1.722103808s)
--- PASS: TestMountStart/serial/DeleteFirst (1.72s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.28s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-767990 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.28s)

                                                
                                    
TestMountStart/serial/Stop (1.29s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-767990
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-767990: (1.289936595s)
--- PASS: TestMountStart/serial/Stop (1.29s)

                                                
                                    
TestMountStart/serial/RestartStopped (7.37s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-767990
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-767990: (6.371500045s)
--- PASS: TestMountStart/serial/RestartStopped (7.37s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-767990 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (107.9s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-900963 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd
E1209 05:06:55.814083 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 05:07:38.986041 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-717497/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-900963 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m47.369898115s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-900963 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (107.90s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (4.96s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-900963 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-900963 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-900963 -- rollout status deployment/busybox: (3.13918279s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-900963 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-900963 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-900963 -- exec busybox-7b57f96db7-p5bql -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-900963 -- exec busybox-7b57f96db7-ptzsb -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-900963 -- exec busybox-7b57f96db7-p5bql -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-900963 -- exec busybox-7b57f96db7-ptzsb -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-900963 -- exec busybox-7b57f96db7-p5bql -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-900963 -- exec busybox-7b57f96db7-ptzsb -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.96s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (1.01s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-900963 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-900963 -- exec busybox-7b57f96db7-p5bql -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-900963 -- exec busybox-7b57f96db7-p5bql -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-900963 -- exec busybox-7b57f96db7-ptzsb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-900963 -- exec busybox-7b57f96db7-ptzsb -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.01s)
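
The pipeline above (nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3) pulls the host gateway address out of busybox nslookup output: fifth line, third space-separated field. A small Go sketch of the same extraction; the sample output is a hypothetical busybox-style response, and real nslookup formatting varies by version.

package main

import (
	"fmt"
	"strings"
)

// fifthLineThirdField mimics awk 'NR==5' | cut -d' ' -f3: take line 5,
// split on single spaces (cut counts empty fields too), return field 3.
func fifthLineThirdField(out string) string {
	lines := strings.Split(out, "\n")
	if len(lines) < 5 {
		return ""
	}
	fields := strings.Split(lines[4], " ")
	if len(fields) < 3 {
		return ""
	}
	return fields[2]
}

func main() {
	// Hypothetical busybox nslookup output; line 5 carries the answer.
	sample := "Server: 10.96.0.10\nAddress: 10.96.0.10:53\n\nName: host.minikube.internal\nAddress 1: 192.168.67.1 host.minikube.internal\n"
	fmt.Println(fifthLineThirdField(sample)) // prints 192.168.67.1
}
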

                                                
                                    
TestMultiNode/serial/AddNode (28.08s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-900963 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-900963 -v=5 --alsologtostderr: (27.390519268s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-900963 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (28.08s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-900963 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.82s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.82s)

                                                
                                    
TestMultiNode/serial/CopyFile (10.43s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-900963 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-900963 cp testdata/cp-test.txt multinode-900963:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-900963 ssh -n multinode-900963 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-900963 cp multinode-900963:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3167582637/001/cp-test_multinode-900963.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-900963 ssh -n multinode-900963 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-900963 cp multinode-900963:/home/docker/cp-test.txt multinode-900963-m02:/home/docker/cp-test_multinode-900963_multinode-900963-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-900963 ssh -n multinode-900963 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-900963 ssh -n multinode-900963-m02 "sudo cat /home/docker/cp-test_multinode-900963_multinode-900963-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-900963 cp multinode-900963:/home/docker/cp-test.txt multinode-900963-m03:/home/docker/cp-test_multinode-900963_multinode-900963-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-900963 ssh -n multinode-900963 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-900963 ssh -n multinode-900963-m03 "sudo cat /home/docker/cp-test_multinode-900963_multinode-900963-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-900963 cp testdata/cp-test.txt multinode-900963-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-900963 ssh -n multinode-900963-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-900963 cp multinode-900963-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3167582637/001/cp-test_multinode-900963-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-900963 ssh -n multinode-900963-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-900963 cp multinode-900963-m02:/home/docker/cp-test.txt multinode-900963:/home/docker/cp-test_multinode-900963-m02_multinode-900963.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-900963 ssh -n multinode-900963-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-900963 ssh -n multinode-900963 "sudo cat /home/docker/cp-test_multinode-900963-m02_multinode-900963.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-900963 cp multinode-900963-m02:/home/docker/cp-test.txt multinode-900963-m03:/home/docker/cp-test_multinode-900963-m02_multinode-900963-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-900963 ssh -n multinode-900963-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-900963 ssh -n multinode-900963-m03 "sudo cat /home/docker/cp-test_multinode-900963-m02_multinode-900963-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-900963 cp testdata/cp-test.txt multinode-900963-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-900963 ssh -n multinode-900963-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-900963 cp multinode-900963-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3167582637/001/cp-test_multinode-900963-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-900963 ssh -n multinode-900963-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-900963 cp multinode-900963-m03:/home/docker/cp-test.txt multinode-900963:/home/docker/cp-test_multinode-900963-m03_multinode-900963.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-900963 ssh -n multinode-900963-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-900963 ssh -n multinode-900963 "sudo cat /home/docker/cp-test_multinode-900963-m03_multinode-900963.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-900963 cp multinode-900963-m03:/home/docker/cp-test.txt multinode-900963-m02:/home/docker/cp-test_multinode-900963-m03_multinode-900963-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-900963 ssh -n multinode-900963-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-900963 ssh -n multinode-900963-m02 "sudo cat /home/docker/cp-test_multinode-900963-m03_multinode-900963-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.43s)
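
Each round trip above follows the same pattern: minikube cp a file onto a node, then read it back with minikube ssh -n <node> "sudo cat ..." and compare. A minimal sketch of one such round trip, with hypothetical profile and node names:

package main

import (
	"bytes"
	"log"
	"os"
	"os/exec"
)

func main() {
	const profile, node, remote = "multinode-demo", "multinode-demo-m02", "/home/docker/cp-test.txt"
	want, err := os.ReadFile("testdata/cp-test.txt")
	if err != nil {
		log.Fatal(err)
	}
	// Copy the file into the node, as the cp helpers above do.
	if out, err := exec.Command("out/minikube-linux-arm64", "-p", profile,
		"cp", "testdata/cp-test.txt", node+":"+remote).CombinedOutput(); err != nil {
		log.Fatalf("cp: %v\n%s", err, out)
	}
	// Read it back over ssh and verify the contents survived the trip.
	got, err := exec.Command("out/minikube-linux-arm64", "-p", profile,
		"ssh", "-n", node, "sudo cat "+remote).Output()
	if err != nil {
		log.Fatal(err)
	}
	if !bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(want)) {
		log.Fatal("copied file does not match source")
	}
}
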

                                                
                                    
TestMultiNode/serial/StopNode (2.38s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-900963 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-900963 node stop m03: (1.31087581s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-900963 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-900963 status: exit status 7 (546.666624ms)

                                                
                                                
-- stdout --
	multinode-900963
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-900963-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-900963-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-900963 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-900963 status --alsologtostderr: exit status 7 (523.698815ms)

                                                
                                                
-- stdout --
	multinode-900963
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-900963-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-900963-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1209 05:08:57.719327 1298502 out.go:360] Setting OutFile to fd 1 ...
	I1209 05:08:57.719506 1298502 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 05:08:57.719519 1298502 out.go:374] Setting ErrFile to fd 2...
	I1209 05:08:57.719525 1298502 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 05:08:57.719797 1298502 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-1142328/.minikube/bin
	I1209 05:08:57.720003 1298502 out.go:368] Setting JSON to false
	I1209 05:08:57.720082 1298502 mustload.go:66] Loading cluster: multinode-900963
	I1209 05:08:57.720175 1298502 notify.go:221] Checking for updates...
	I1209 05:08:57.720516 1298502 config.go:182] Loaded profile config "multinode-900963": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1209 05:08:57.720535 1298502 status.go:174] checking status of multinode-900963 ...
	I1209 05:08:57.721388 1298502 cli_runner.go:164] Run: docker container inspect multinode-900963 --format={{.State.Status}}
	I1209 05:08:57.740747 1298502 status.go:371] multinode-900963 host status = "Running" (err=<nil>)
	I1209 05:08:57.740773 1298502 host.go:66] Checking if "multinode-900963" exists ...
	I1209 05:08:57.741055 1298502 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-900963
	I1209 05:08:57.761863 1298502 host.go:66] Checking if "multinode-900963" exists ...
	I1209 05:08:57.762169 1298502 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1209 05:08:57.762216 1298502 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-900963
	I1209 05:08:57.782327 1298502 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34025 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/multinode-900963/id_rsa Username:docker}
	I1209 05:08:57.885353 1298502 ssh_runner.go:195] Run: systemctl --version
	I1209 05:08:57.892053 1298502 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 05:08:57.904849 1298502 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 05:08:57.961897 1298502 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-09 05:08:57.952523559 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1209 05:08:57.962482 1298502 kubeconfig.go:125] found "multinode-900963" server: "https://192.168.67.2:8443"
	I1209 05:08:57.962545 1298502 api_server.go:166] Checking apiserver status ...
	I1209 05:08:57.962594 1298502 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:08:57.974600 1298502 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1426/cgroup
	I1209 05:08:57.983069 1298502 api_server.go:182] apiserver freezer: "4:freezer:/docker/d4eca6fd21f344af10a2f2b712d4a625eeeba5af97585e7148802c8a36d96f95/kubepods/burstable/podb6e99c26b5edc0410e3b49628b4210a1/496a549dfec7c1b5cf554a6db7d8cd696ec1c00d00b091187b4af27066b7d182"
	I1209 05:08:57.983146 1298502 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/d4eca6fd21f344af10a2f2b712d4a625eeeba5af97585e7148802c8a36d96f95/kubepods/burstable/podb6e99c26b5edc0410e3b49628b4210a1/496a549dfec7c1b5cf554a6db7d8cd696ec1c00d00b091187b4af27066b7d182/freezer.state
	I1209 05:08:57.992337 1298502 api_server.go:204] freezer state: "THAWED"
	I1209 05:08:57.992364 1298502 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1209 05:08:58.000818 1298502 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1209 05:08:58.000852 1298502 status.go:463] multinode-900963 apiserver status = Running (err=<nil>)
	I1209 05:08:58.000863 1298502 status.go:176] multinode-900963 status: &{Name:multinode-900963 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1209 05:08:58.000882 1298502 status.go:174] checking status of multinode-900963-m02 ...
	I1209 05:08:58.001278 1298502 cli_runner.go:164] Run: docker container inspect multinode-900963-m02 --format={{.State.Status}}
	I1209 05:08:58.020190 1298502 status.go:371] multinode-900963-m02 host status = "Running" (err=<nil>)
	I1209 05:08:58.020213 1298502 host.go:66] Checking if "multinode-900963-m02" exists ...
	I1209 05:08:58.020546 1298502 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-900963-m02
	I1209 05:08:58.037748 1298502 host.go:66] Checking if "multinode-900963-m02" exists ...
	I1209 05:08:58.038071 1298502 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1209 05:08:58.038115 1298502 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-900963-m02
	I1209 05:08:58.056686 1298502 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34030 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/multinode-900963-m02/id_rsa Username:docker}
	I1209 05:08:58.161313 1298502 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 05:08:58.173495 1298502 status.go:176] multinode-900963-m02 status: &{Name:multinode-900963-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1209 05:08:58.173525 1298502 status.go:174] checking status of multinode-900963-m03 ...
	I1209 05:08:58.173823 1298502 cli_runner.go:164] Run: docker container inspect multinode-900963-m03 --format={{.State.Status}}
	I1209 05:08:58.190536 1298502 status.go:371] multinode-900963-m03 host status = "Stopped" (err=<nil>)
	I1209 05:08:58.190555 1298502 status.go:384] host is not running, skipping remaining checks
	I1209 05:08:58.190561 1298502 status.go:176] multinode-900963-m03 status: &{Name:multinode-900963-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.38s)
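
The status trace above shows how the control-plane check ends: after locating the kube-apiserver process and confirming its freezer cgroup is THAWED, minikube probes https://192.168.67.2:8443/healthz and expects a 200/"ok". A bare-bones probe sketch follows; the InsecureSkipVerify is a simplification (minikube's own client trusts the cluster CA), and an anonymous request like this may be rejected on clusters that disable anonymous auth.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Assumption: skip cert verification for brevity only.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.67.2:8443/healthz")
	if err != nil {
		fmt.Println("apiserver unreachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz: %d %s\n", resp.StatusCode, body) // expect: 200 ok
}
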

                                                
                                    
TestMultiNode/serial/StartAfterStop (7.88s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-900963 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-900963 node start m03 -v=5 --alsologtostderr: (7.083011481s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-900963 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (7.88s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (74.5s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-900963
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-900963
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-900963: (25.09289861s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-900963 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-900963 --wait=true -v=5 --alsologtostderr: (49.282702105s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-900963
--- PASS: TestMultiNode/serial/RestartKeepsNodes (74.50s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.66s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-900963 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-900963 node delete m03: (4.967094559s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-900963 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.66s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (24.15s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-900963 stop
E1209 05:10:32.751098 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-900963 stop: (23.949481675s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-900963 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-900963 status: exit status 7 (96.430622ms)

                                                
                                                
-- stdout --
	multinode-900963
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-900963-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-900963 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-900963 status --alsologtostderr: exit status 7 (100.375012ms)

                                                
                                                
-- stdout --
	multinode-900963
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-900963-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1209 05:10:50.319891 1307260 out.go:360] Setting OutFile to fd 1 ...
	I1209 05:10:50.320091 1307260 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 05:10:50.320122 1307260 out.go:374] Setting ErrFile to fd 2...
	I1209 05:10:50.320144 1307260 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 05:10:50.320429 1307260 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-1142328/.minikube/bin
	I1209 05:10:50.320651 1307260 out.go:368] Setting JSON to false
	I1209 05:10:50.320720 1307260 mustload.go:66] Loading cluster: multinode-900963
	I1209 05:10:50.320807 1307260 notify.go:221] Checking for updates...
	I1209 05:10:50.321170 1307260 config.go:182] Loaded profile config "multinode-900963": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1209 05:10:50.321208 1307260 status.go:174] checking status of multinode-900963 ...
	I1209 05:10:50.322059 1307260 cli_runner.go:164] Run: docker container inspect multinode-900963 --format={{.State.Status}}
	I1209 05:10:50.342083 1307260 status.go:371] multinode-900963 host status = "Stopped" (err=<nil>)
	I1209 05:10:50.342109 1307260 status.go:384] host is not running, skipping remaining checks
	I1209 05:10:50.342116 1307260 status.go:176] multinode-900963 status: &{Name:multinode-900963 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1209 05:10:50.342147 1307260 status.go:174] checking status of multinode-900963-m02 ...
	I1209 05:10:50.342447 1307260 cli_runner.go:164] Run: docker container inspect multinode-900963-m02 --format={{.State.Status}}
	I1209 05:10:50.373247 1307260 status.go:371] multinode-900963-m02 host status = "Stopped" (err=<nil>)
	I1209 05:10:50.373269 1307260 status.go:384] host is not running, skipping remaining checks
	I1209 05:10:50.373276 1307260 status.go:176] multinode-900963-m02 status: &{Name:multinode-900963-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.15s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (52.29s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-900963 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd
E1209 05:11:06.731584 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/addons-221952/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-900963 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd: (51.524940274s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-900963 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (52.29s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (34.92s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-900963
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-900963-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-900963-m02 --driver=docker  --container-runtime=containerd: exit status 14 (88.659514ms)

                                                
                                                
-- stdout --
	* [multinode-900963-m02] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22081
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22081-1142328/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-1142328/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-900963-m02' is duplicated with machine name 'multinode-900963-m02' in profile 'multinode-900963'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-900963-m03 --driver=docker  --container-runtime=containerd
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-900963-m03 --driver=docker  --container-runtime=containerd: (31.974739721s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-900963
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-900963: exit status 80 (364.029728ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-900963 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-900963-m03 already exists in multinode-900963-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-900963-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-900963-m03: (2.431437369s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (34.92s)
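
The MK_USAGE failure above encodes the naming rule this test validates: a new profile name may not collide with any machine name of an existing multinode profile, whose node machines are named <profile>, <profile>-m02, <profile>-m03, and so on. A minimal sketch of that check; the existing-profile map here is hypothetical.

package main

import "fmt"

// machineNames lists the machine names a multinode profile owns:
// the profile itself, then <profile>-m02 .. <profile>-mNN.
func machineNames(profile string, nodes int) []string {
	names := []string{profile}
	for i := 2; i <= nodes; i++ {
		names = append(names, fmt.Sprintf("%s-m%02d", profile, i))
	}
	return names
}

func validateProfileName(name string, existing map[string]int) error {
	for profile, nodes := range existing {
		for _, m := range machineNames(profile, nodes) {
			if m == name {
				return fmt.Errorf("profile name %q is duplicated with machine name %q in profile %q", name, m, profile)
			}
		}
	}
	return nil
}

func main() {
	existing := map[string]int{"multinode-900963": 2}
	fmt.Println(validateProfileName("multinode-900963-m02", existing)) // rejected, as above
	fmt.Println(validateProfileName("multinode-900963-m03", existing)) // allowed: no m03 machine exists
}
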

                                                
                                    
TestPreload (121.15s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:41: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-241370 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd
E1209 05:12:22.063537 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-717497/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 05:12:38.985727 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-717497/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:41: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-241370 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd: (56.470070622s)
preload_test.go:49: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-241370 image pull gcr.io/k8s-minikube/busybox
preload_test.go:49: (dbg) Done: out/minikube-linux-arm64 -p test-preload-241370 image pull gcr.io/k8s-minikube/busybox: (2.35294218s)
preload_test.go:55: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-241370
preload_test.go:55: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-241370: (5.931143748s)
preload_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-241370 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
preload_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-241370 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (53.655464536s)
preload_test.go:68: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-241370 image list
helpers_test.go:175: Cleaning up "test-preload-241370" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-241370
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-241370: (2.500952726s)
--- PASS: TestPreload (121.15s)
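
TestPreload's sequence above is: start with --preload=false, pull an extra image, stop, restart with --preload=true, and confirm via image list that the pulled image survived the restart. A minimal Go sketch of the same sequence, with the flags taken from the commands above; the profile name "preload-demo" is hypothetical.

package main

import (
	"log"
	"os/exec"
	"strings"
)

// run invokes the minikube binary and returns its combined output.
func run(args ...string) string {
	out, err := exec.Command("out/minikube-linux-arm64", args...).CombinedOutput()
	if err != nil {
		log.Fatalf("minikube %v: %v\n%s", args, err, out)
	}
	return string(out)
}

func main() {
	run("start", "-p", "preload-demo", "--memory=3072", "--preload=false",
		"--driver=docker", "--container-runtime=containerd")
	run("-p", "preload-demo", "image", "pull", "gcr.io/k8s-minikube/busybox")
	run("stop", "-p", "preload-demo")
	run("start", "-p", "preload-demo", "--preload=true", "--wait=true",
		"--driver=docker", "--container-runtime=containerd")
	// The image pulled before the stop must still be present afterwards.
	if !strings.Contains(run("-p", "preload-demo", "image", "list"), "busybox") {
		log.Fatal("pulled image missing after preload restart")
	}
	run("delete", "-p", "preload-demo")
}
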

                                                
                                    
TestScheduledStopUnix (110.39s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-631027 --memory=3072 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-631027 --memory=3072 --driver=docker  --container-runtime=containerd: (34.3257175s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-631027 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1209 05:14:57.304966 1323131 out.go:360] Setting OutFile to fd 1 ...
	I1209 05:14:57.305310 1323131 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 05:14:57.305337 1323131 out.go:374] Setting ErrFile to fd 2...
	I1209 05:14:57.305356 1323131 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 05:14:57.305638 1323131 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-1142328/.minikube/bin
	I1209 05:14:57.305929 1323131 out.go:368] Setting JSON to false
	I1209 05:14:57.306093 1323131 mustload.go:66] Loading cluster: scheduled-stop-631027
	I1209 05:14:57.306486 1323131 config.go:182] Loaded profile config "scheduled-stop-631027": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1209 05:14:57.306580 1323131 profile.go:143] Saving config to /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/scheduled-stop-631027/config.json ...
	I1209 05:14:57.306836 1323131 mustload.go:66] Loading cluster: scheduled-stop-631027
	I1209 05:14:57.307002 1323131 config.go:182] Loaded profile config "scheduled-stop-631027": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-631027 -n scheduled-stop-631027
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-631027 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1209 05:14:57.756629 1323222 out.go:360] Setting OutFile to fd 1 ...
	I1209 05:14:57.756779 1323222 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 05:14:57.756791 1323222 out.go:374] Setting ErrFile to fd 2...
	I1209 05:14:57.756796 1323222 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 05:14:57.757070 1323222 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-1142328/.minikube/bin
	I1209 05:14:57.757304 1323222 out.go:368] Setting JSON to false
	I1209 05:14:57.757508 1323222 daemonize_unix.go:73] killing process 1323153 as it is an old scheduled stop
	I1209 05:14:57.757637 1323222 mustload.go:66] Loading cluster: scheduled-stop-631027
	I1209 05:14:57.758077 1323222 config.go:182] Loaded profile config "scheduled-stop-631027": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1209 05:14:57.758168 1323222 profile.go:143] Saving config to /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/scheduled-stop-631027/config.json ...
	I1209 05:14:57.758358 1323222 mustload.go:66] Loading cluster: scheduled-stop-631027
	I1209 05:14:57.758491 1323222 config.go:182] Loaded profile config "scheduled-stop-631027": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:180: process 1323153 is a zombie
I1209 05:14:57.765390 1144231 retry.go:31] will retry after 58.423µs: open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/scheduled-stop-631027/pid: no such file or directory
I1209 05:14:57.766818 1144231 retry.go:31] will retry after 79.142µs: open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/scheduled-stop-631027/pid: no such file or directory
I1209 05:14:57.767922 1144231 retry.go:31] will retry after 276.114µs: open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/scheduled-stop-631027/pid: no such file or directory
I1209 05:14:57.769022 1144231 retry.go:31] will retry after 355.201µs: open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/scheduled-stop-631027/pid: no such file or directory
I1209 05:14:57.770150 1144231 retry.go:31] will retry after 608.078µs: open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/scheduled-stop-631027/pid: no such file or directory
I1209 05:14:57.771275 1144231 retry.go:31] will retry after 1.096799ms: open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/scheduled-stop-631027/pid: no such file or directory
I1209 05:14:57.773467 1144231 retry.go:31] will retry after 794.722µs: open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/scheduled-stop-631027/pid: no such file or directory
I1209 05:14:57.774538 1144231 retry.go:31] will retry after 2.02191ms: open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/scheduled-stop-631027/pid: no such file or directory
I1209 05:14:57.777819 1144231 retry.go:31] will retry after 2.413975ms: open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/scheduled-stop-631027/pid: no such file or directory
I1209 05:14:57.781174 1144231 retry.go:31] will retry after 4.50578ms: open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/scheduled-stop-631027/pid: no such file or directory
I1209 05:14:57.786348 1144231 retry.go:31] will retry after 3.84481ms: open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/scheduled-stop-631027/pid: no such file or directory
I1209 05:14:57.790552 1144231 retry.go:31] will retry after 5.630871ms: open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/scheduled-stop-631027/pid: no such file or directory
I1209 05:14:57.796748 1144231 retry.go:31] will retry after 17.872674ms: open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/scheduled-stop-631027/pid: no such file or directory
I1209 05:14:57.814936 1144231 retry.go:31] will retry after 18.319935ms: open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/scheduled-stop-631027/pid: no such file or directory
I1209 05:14:57.834102 1144231 retry.go:31] will retry after 23.135774ms: open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/scheduled-stop-631027/pid: no such file or directory
I1209 05:14:57.857478 1144231 retry.go:31] will retry after 49.457296ms: open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/scheduled-stop-631027/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-631027 --cancel-scheduled
minikube stop output:

                                                
                                                
-- stdout --
	* All existing scheduled stops cancelled

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-631027 -n scheduled-stop-631027
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-631027
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-631027 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1209 05:15:23.711199 1323895 out.go:360] Setting OutFile to fd 1 ...
	I1209 05:15:23.711398 1323895 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 05:15:23.711429 1323895 out.go:374] Setting ErrFile to fd 2...
	I1209 05:15:23.711451 1323895 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 05:15:23.711701 1323895 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-1142328/.minikube/bin
	I1209 05:15:23.711967 1323895 out.go:368] Setting JSON to false
	I1209 05:15:23.712133 1323895 mustload.go:66] Loading cluster: scheduled-stop-631027
	I1209 05:15:23.712562 1323895 config.go:182] Loaded profile config "scheduled-stop-631027": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1209 05:15:23.712671 1323895 profile.go:143] Saving config to /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/scheduled-stop-631027/config.json ...
	I1209 05:15:23.712892 1323895 mustload.go:66] Loading cluster: scheduled-stop-631027
	I1209 05:15:23.713050 1323895 config.go:182] Loaded profile config "scheduled-stop-631027": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
E1209 05:15:32.751173 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:172: signal error was:  os: process already finished
E1209 05:16:06.732264 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/addons-221952/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-631027
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-631027: exit status 7 (68.988575ms)

                                                
                                                
-- stdout --
	scheduled-stop-631027
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-631027 -n scheduled-stop-631027
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-631027 -n scheduled-stop-631027: exit status 7 (75.872877ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-631027" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-631027
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-631027: (4.45038385s)
--- PASS: TestScheduledStopUnix (110.39s)
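The scheduled-stop flow above can be driven manually with the same flags; a condensed sketch (the sched-demo profile name is illustrative):

    $ minikube stop -p sched-demo --schedule 5m          # arm a stop five minutes out
    $ minikube status -p sched-demo --format={{.TimeToStop}}
    $ minikube stop -p sched-demo --cancel-scheduled     # "All existing scheduled stops cancelled"
    $ minikube stop -p sched-demo --schedule 15s         # re-arm; once it fires,
    $ minikube status -p sched-demo                      # exits 7 with host/kubelet/apiserver Stopped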

                                                
                                    
TestInsufficientStorage (12.15s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-592881 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-592881 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (9.576211608s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"b39b9ede-d2ed-4603-ace8-0f2cf927054b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-592881] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"5fcc9d09-4db6-4bd4-95f5-6b8be4b63a43","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22081"}}
	{"specversion":"1.0","id":"2bfb20a0-8665-4f77-800e-c9b66e9c5a09","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"abbcee4c-7707-4e4b-8f10-23ae0a22dc29","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22081-1142328/kubeconfig"}}
	{"specversion":"1.0","id":"4c579d67-69a3-407b-adbe-cde04e1c9488","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-1142328/.minikube"}}
	{"specversion":"1.0","id":"3e84832a-5ba9-47fd-b719-ada2065c08f4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"52d811b2-81c6-4b55-bb9b-91270b0aa9c1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"50b74570-d0f8-42b8-bcec-b06883d3927d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"c66917ad-0a0c-44d5-af29-c0afd2090849","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"83a1f6e9-6a57-4dee-9cbe-f65337786803","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"4a30b6d3-b122-44ef-a2fd-032ca0a3399e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"ee94f23a-8a6e-4f35-84f7-869cf8b7031c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-592881\" primary control-plane node in \"insufficient-storage-592881\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"382a96c9-0604-4fcc-8d24-267fdead6488","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1765184860-22066 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"750f4192-e16d-4e17-826c-a84c43bcea82","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"d5cf3dae-43e9-4b88-9d66-576c97f87ea1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-592881 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-592881 --output=json --layout=cluster: exit status 7 (294.122378ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-592881","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-592881","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1209 05:16:23.168374 1325718 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-592881" does not appear in /home/jenkins/minikube-integration/22081-1142328/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-592881 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-592881 --output=json --layout=cluster: exit status 7 (321.458968ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-592881","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-592881","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1209 05:16:23.488528 1325787 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-592881" does not appear in /home/jenkins/minikube-integration/22081-1142328/kubeconfig
	E1209 05:16:23.498955 1325787 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/insufficient-storage-592881/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-592881" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-592881
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-592881: (1.953926373s)
--- PASS: TestInsufficientStorage (12.15s)
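Note the test fakes a full disk through the test-only MINIKUBE_TEST_STORAGE_CAPACITY / MINIKUBE_TEST_AVAILABLE_STORAGE environment variables (visible in the CloudEvents stream above), then checks the machine-readable output. A sketch of inspecting the same state, assuming jq is available:

    $ MINIKUBE_TEST_STORAGE_CAPACITY=100 MINIKUBE_TEST_AVAILABLE_STORAGE=19 \
        minikube start -p storage-demo --output=json --driver=docker
    $ echo $?    # 26 (RSRC_DOCKER_STORAGE)
    $ minikube status -p storage-demo --output=json --layout=cluster \
        | jq -r '.StatusName'    # "InsufficientStorage"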

                                                
                                    
TestRunningBinaryUpgrade (311.55s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.35.0.552825654 start -p running-upgrade-571370 --memory=3072 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.35.0.552825654 start -p running-upgrade-571370 --memory=3072 --vm-driver=docker  --container-runtime=containerd: (30.977689532s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-571370 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E1209 05:25:32.751082 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 05:26:06.731807 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/addons-221952/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 05:27:38.985957 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-717497/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 05:29:02.065200 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-717497/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-571370 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m36.348093866s)
helpers_test.go:175: Cleaning up "running-upgrade-571370" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-571370
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-571370: (1.986665844s)
--- PASS: TestRunningBinaryUpgrade (311.55s)
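The in-place upgrade amounts to creating a profile with an older release binary and re-running start against the same profile with the newer binary; a sketch (binary paths and profile name are illustrative; older releases take --vm-driver where newer ones take --driver):

    $ /tmp/minikube-v1.35.0 start -p upgrade-demo --memory=3072 --vm-driver=docker --container-runtime=containerd
    $ out/minikube-linux-arm64 start -p upgrade-demo --memory=3072 --driver=docker --container-runtime=containerd
    $ out/minikube-linux-arm64 delete -p upgrade-demo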

                                                
                                    
TestMissingContainerUpgrade (163.19s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.35.0.1851080525 start -p missing-upgrade-253761 --memory=3072 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.35.0.1851080525 start -p missing-upgrade-253761 --memory=3072 --driver=docker  --container-runtime=containerd: (1m4.302831185s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-253761
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-253761
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-253761 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-253761 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m32.807886979s)
helpers_test.go:175: Cleaning up "missing-upgrade-253761" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-253761
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-253761: (1.951444913s)
--- PASS: TestMissingContainerUpgrade (163.19s)
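Here the cluster's node container is removed behind minikube's back, and the newer binary is expected to recreate it on the next start; a sketch (profile name illustrative):

    $ docker stop missing-demo && docker rm missing-demo    # delete the node container directly
    $ out/minikube-linux-arm64 start -p missing-demo --driver=docker --container-runtime=containerd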

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-284947 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-284947 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd: exit status 14 (87.223851ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-284947] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22081
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22081-1142328/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-1142328/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
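--no-kubernetes and --kubernetes-version are mutually exclusive, and the conflict is rejected up front with usage exit code 14 (MK_USAGE). If a version is pinned in the global config, clear it first, as the error message suggests:

    $ minikube start -p demo --no-kubernetes --kubernetes-version=v1.28.0
    $ echo $?    # 14
    $ minikube config unset kubernetes-version
    $ minikube start -p demo --no-kubernetes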

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (45.34s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-284947 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-284947 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (44.79540921s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-284947 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (45.34s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (8.43s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-284947 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-284947 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (5.902467427s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-284947 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-284947 status -o json: exit status 2 (396.766121ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-284947","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-284947
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-284947: (2.134180749s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (8.43s)

                                                
                                    
TestNoKubernetes/serial/Start (8.59s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-284947 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-284947 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (8.589375524s)
--- PASS: TestNoKubernetes/serial/Start (8.59s)

                                                
                                    
TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/linux/arm64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.32s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-284947 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-284947 "sudo systemctl is-active --quiet service kubelet": exit status 1 (315.286579ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.32s)
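The check is a plain systemd probe run over minikube ssh; the "Process exited with status 3" in stderr is systemctl's code for an inactive unit, which the wrapper surfaces as a non-zero exit:

    $ minikube ssh -p demo "sudo systemctl is-active --quiet service kubelet"
    $ echo $?    # non-zero: kubelet is not running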

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.59s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.59s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.44s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-284947
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-284947: (1.4417942s)
--- PASS: TestNoKubernetes/serial/Stop (1.44s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (6.69s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-284947 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-284947 --driver=docker  --container-runtime=containerd: (6.686475998s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.69s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.29s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-284947 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-284947 "sudo systemctl is-active --quiet service kubelet": exit status 1 (289.698842ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.29s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (2.75s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.75s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (304.06s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.35.0.3881999616 start -p stopped-upgrade-774042 --memory=3072 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.35.0.3881999616 start -p stopped-upgrade-774042 --memory=3072 --vm-driver=docker  --container-runtime=containerd: (35.516179221s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.35.0.3881999616 -p stopped-upgrade-774042 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.35.0.3881999616 -p stopped-upgrade-774042 stop: (1.27187194s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-774042 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E1209 05:20:32.751662 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 05:21:06.732336 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/addons-221952/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 05:22:29.817562 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/addons-221952/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 05:22:38.988299 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-717497/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 05:23:35.816184 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-774042 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m27.275797155s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (304.06s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.86s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-774042
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-774042: (1.858412806s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.86s)

                                                
                                    
TestPause/serial/Start (54.82s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-523987 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-523987 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (54.8220329s)
--- PASS: TestPause/serial/Start (54.82s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (6.14s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-523987 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-523987 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (6.127977426s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (6.14s)

                                                
                                    
TestPause/serial/Pause (0.73s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-523987 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.73s)

                                                
                                    
TestPause/serial/VerifyStatus (0.32s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-523987 --output=json --layout=cluster
E1209 05:30:32.751721 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-523987 --output=json --layout=cluster: exit status 2 (321.1608ms)

                                                
                                                
-- stdout --
	{"Name":"pause-523987","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-523987","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.32s)

                                                
                                    
TestPause/serial/Unpause (0.63s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-523987 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.63s)

                                                
                                    
TestPause/serial/PauseAgain (0.82s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-523987 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.82s)

                                                
                                    
TestPause/serial/DeletePaused (2.78s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-523987 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-523987 --alsologtostderr -v=5: (2.781023778s)
--- PASS: TestPause/serial/DeletePaused (2.78s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.4s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-523987
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-523987: exit status 1 (18.586726ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-523987: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.40s)
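Taken together, the pause group walks the full profile lifecycle; a condensed sketch of the same sequence (pause-demo is an illustrative profile name):

    $ minikube start -p pause-demo --install-addons=false --wait=all --driver=docker --container-runtime=containerd
    $ minikube pause -p pause-demo
    $ minikube status -p pause-demo --output=json --layout=cluster
    $ echo $?    # 2: apiserver reports Paused (status code 418)
    $ minikube unpause -p pause-demo
    $ minikube delete -p pause-demo
    $ docker volume inspect pause-demo    # exit 1: the profile's volume is gone with it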

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (55.39s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-384009 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0
E1209 05:32:38.985641 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-717497/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-384009 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0: (55.39302388s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (55.39s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (9.45s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-384009 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [98d3f328-3a68-4da8-a778-fa1fc1ca6745] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [98d3f328-3a68-4da8-a778-fa1fc1ca6745] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.003140838s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-384009 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.45s)
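The harness itself polls for the busybox pod; roughly the same check can be done with kubectl alone (the wait invocation below is an equivalent sketch, not what the test runs):

    $ kubectl --context old-k8s-version-384009 create -f testdata/busybox.yaml
    $ kubectl --context old-k8s-version-384009 wait --for=condition=Ready pod/busybox --timeout=8m
    $ kubectl --context old-k8s-version-384009 exec busybox -- /bin/sh -c "ulimit -n"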

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.17s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-384009 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-384009 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.049153463s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-384009 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.17s)
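The --images/--registries flags point the metrics-server addon at a stand-in image, which the follow-up describe inspects; the shape of the call (demo profile name is illustrative):

    $ minikube addons enable metrics-server -p demo \
        --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
        --registries=MetricsServer=fake.domain
    $ kubectl --context demo describe deploy/metrics-server -n kube-system    # image should resolve under fake.domain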

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (12.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-384009 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-384009 --alsologtostderr -v=3: (12.112794851s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.11s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-384009 -n old-k8s-version-384009
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-384009 -n old-k8s-version-384009: exit status 7 (76.132708ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-384009 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (49.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-384009 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-384009 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0: (48.606276351s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-384009 -n old-k8s-version-384009
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (49.01s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-xsjff" [58f5afe5-3717-4461-a8a5-4e3953e154a1] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003524163s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-xsjff" [58f5afe5-3717-4461-a8a5-4e3953e154a1] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003275485s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-384009 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-384009 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.27s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (3.18s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-384009 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-384009 -n old-k8s-version-384009
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-384009 -n old-k8s-version-384009: exit status 2 (335.298541ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-384009 -n old-k8s-version-384009
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-384009 -n old-k8s-version-384009: exit status 2 (348.508292ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-384009 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-384009 -n old-k8s-version-384009
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-384009 -n old-k8s-version-384009
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.18s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (83.4s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-432108 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-432108 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2: (1m23.398993912s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (83.40s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (9.32s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-432108 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [da990fc9-1900-4b82-aec8-04172993c6b7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [da990fc9-1900-4b82-aec8-04172993c6b7] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.00315943s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-432108 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.32s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.12s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-432108 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-432108 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.12s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (12.11s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-432108 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-432108 --alsologtostderr -v=3: (12.105529685s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.11s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-432108 -n embed-certs-432108
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-432108 -n embed-certs-432108: exit status 7 (86.323917ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-432108 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)
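Note: against a stopped profile, minikube status prints the host state but exits non-zero (7 here), and the test explicitly tolerates that ("may be ok"). A script probing profile state should branch on the code rather than aborting; a minimal sketch:

    out/minikube-linux-arm64 status --format='{{.Host}}' -p embed-certs-432108
    rc=$?
    case "$rc" in
      0|7) echo "status readable (rc=$rc)" ;;
      *)   echo "unexpected status exit code: $rc" >&2; exit 1 ;;
    esac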

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (49.38s)
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-432108 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-432108 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2: (49.007483811s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-432108 -n embed-certs-432108
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (49.38s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-j9qfk" [70507214-4685-4902-afbf-8e055527fea1] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004030016s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)
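Note: this check waits up to 9m for the dashboard pod (from the dashboard addon enabled while the cluster was stopped) to be Running and healthy. A manual equivalent:

    kubectl --context embed-certs-432108 -n kubernetes-dashboard wait pod \
      -l k8s-app=kubernetes-dashboard --for=condition=Ready --timeout=9m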

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.1s)
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-j9qfk" [70507214-4685-4902-afbf-8e055527fea1] Running
E1209 05:37:38.985130 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-717497/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003213675s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-432108 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-432108 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)
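Note: the image check dumps the node's image store as JSON and flags anything outside the stock minikube set (here the kindnet CNI image and the busybox test image, both expected leftovers from earlier steps). To list the tags yourself, assuming jq is available and that the JSON entries carry a repoTags array (worth verifying against your minikube version):

    out/minikube-linux-arm64 -p embed-certs-432108 image list --format=json | jq -r '.[].repoTags[]'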

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (3.13s)
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-432108 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-432108 -n embed-certs-432108
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-432108 -n embed-certs-432108: exit status 2 (351.360602ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-432108 -n embed-certs-432108
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-432108 -n embed-certs-432108: exit status 2 (313.807084ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-432108 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-432108 -n embed-certs-432108
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-432108 -n embed-certs-432108
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.13s)
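Note: the pause check drives minikube's Go-template status output: after pause, {{.APIServer}} prints Paused and {{.Kubelet}} prints Stopped, each with exit status 2 (again "may be ok"); after unpause the same probes run without a flagged error, so presumably report Running with exit 0 (the post-unpause values are an inference, not shown in the log). The cycle, condensed:

    out/minikube-linux-arm64 pause -p embed-certs-432108
    out/minikube-linux-arm64 status --format='{{.APIServer}}' -p embed-certs-432108   # Paused, exit 2
    out/minikube-linux-arm64 status --format='{{.Kubelet}}' -p embed-certs-432108     # Stopped, exit 2
    out/minikube-linux-arm64 unpause -p embed-certs-432108
    out/minikube-linux-arm64 status --format='{{.APIServer}}' -p embed-certs-432108   # expected Running, exit 0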

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (79.77s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-564611 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2
E1209 05:38:26.932535 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/old-k8s-version-384009/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 05:38:26.938922 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/old-k8s-version-384009/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 05:38:26.950295 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/old-k8s-version-384009/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 05:38:26.971746 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/old-k8s-version-384009/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 05:38:27.013318 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/old-k8s-version-384009/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 05:38:27.094788 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/old-k8s-version-384009/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 05:38:27.256360 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/old-k8s-version-384009/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 05:38:27.577832 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/old-k8s-version-384009/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 05:38:28.219188 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/old-k8s-version-384009/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 05:38:29.501538 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/old-k8s-version-384009/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 05:38:32.062998 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/old-k8s-version-384009/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 05:38:37.185245 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/old-k8s-version-384009/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 05:38:47.426601 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/old-k8s-version-384009/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 05:39:07.908479 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/old-k8s-version-384009/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-564611 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2: (1m19.766454145s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (79.77s)
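Note: this profile pins the API server to port 8444 instead of the default 8443, which is the point of the diff-port group. To confirm which endpoint the generated kubeconfig entry targets:

    kubectl config view -o jsonpath='{.clusters[?(@.name=="default-k8s-diff-port-564611")].cluster.server}'
    # expect an https URL ending in :8444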

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.36s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-564611 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [26fe724a-356f-4b63-86d2-14dd6cd112fb] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1209 05:39:09.819832 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/addons-221952/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "busybox" [26fe724a-356f-4b63-86d2-14dd6cd112fb] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.002880608s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-564611 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.36s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.05s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-564611 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-564611 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.05s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (12.1s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-564611 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-564611 --alsologtostderr -v=3: (12.096563273s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.10s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-564611 -n default-k8s-diff-port-564611
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-564611 -n default-k8s-diff-port-564611: exit status 7 (67.489329ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-564611 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (51.87s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-564611 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2
E1209 05:39:48.870825 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/old-k8s-version-384009/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 05:40:15.817608 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-564611 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2: (51.513269221s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-564611 -n default-k8s-diff-port-564611
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (51.87s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-jjhdd" [33491ebe-6146-4035-bf3d-9f61c5035bd2] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.002656806s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-jjhdd" [33491ebe-6146-4035-bf3d-9f61c5035bd2] Running
E1209 05:40:32.751475 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.002855496s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-564611 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-564611 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.07s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-564611 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-564611 -n default-k8s-diff-port-564611
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-564611 -n default-k8s-diff-port-564611: exit status 2 (335.813333ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-564611 -n default-k8s-diff-port-564611
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-564611 -n default-k8s-diff-port-564611: exit status 2 (326.38146ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-564611 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-564611 -n default-k8s-diff-port-564611
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-564611 -n default-k8s-diff-port-564611
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.07s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (1.3s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-842269 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-842269 --alsologtostderr -v=3: (1.297606907s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (1.30s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-842269 -n no-preload-842269
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-842269 -n no-preload-842269: exit status 7 (65.827933ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-842269 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (1.31s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-262540 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-262540 --alsologtostderr -v=3: (1.310981076s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.31s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-262540 -n newest-cni-262540
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-262540 -n newest-cni-262540: exit status 7 (66.855946ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-262540 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-262540 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

                                                
                                    

Test skip (36/369)

Order skipped test Duration
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.2/cached-images 0
15 TestDownloadOnly/v1.34.2/binaries 0
16 TestDownloadOnly/v1.34.2/kubectl 0
23 TestDownloadOnly/v1.35.0-beta.0/cached-images 0
24 TestDownloadOnly/v1.35.0-beta.0/binaries 0
25 TestDownloadOnly/v1.35.0-beta.0/kubectl 0
29 TestDownloadOnlyKic 0.42
31 TestOffline 0
42 TestAddons/serial/GCPAuth/RealCredentials 0.01
49 TestAddons/parallel/Olm 0
56 TestAddons/parallel/AmdGpuDevicePlugin 0
60 TestDockerFlags 0
64 TestHyperKitDriverInstallOrUpdate 0
65 TestHyperkitDriverSkipUpgrade 0
112 TestFunctional/parallel/MySQL 0
116 TestFunctional/parallel/DockerEnv 0
117 TestFunctional/parallel/PodmanEnv 0
130 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0
131 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
132 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0
207 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL 0
211 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv 0
212 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv 0
245 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig 0
246 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
247 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS 0
261 TestGvisorAddon 0
283 TestImageBuild 0
284 TestISOImage 0
348 TestChangeNoneUser 0
351 TestScheduledStopWindows 0
353 TestSkaffold 0
379 TestStartStop/group/disable-driver-mounts 0.21
TestDownloadOnly/v1.28.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.2/cached-images (0s)
=== RUN   TestDownloadOnly/v1.34.2/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.2/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.2/binaries (0s)
=== RUN   TestDownloadOnly/v1.34.2/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.2/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.2/kubectl (0s)
=== RUN   TestDownloadOnly/v1.34.2/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.2/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.35.0-beta.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.35.0-beta.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.35.0-beta.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0.42s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-239277 --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:248: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-239277" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-239277
--- SKIP: TestDownloadOnlyKic (0.42s)

                                                
                                    
TestOffline (0s)
=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
TestAddons/serial/GCPAuth/RealCredentials (0.01s)
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:819: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.01s)

                                                
                                    
TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:543: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (0s)
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1093: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
TestDockerFlags (0s)
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/MySQL (0s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (0s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestISOImage (0s)
=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

                                                
                                    
TestChangeNoneUser (0s)
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.21s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-094940" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-094940
--- SKIP: TestStartStop/group/disable-driver-mounts (0.21s)

                                                
                                    